4th International Workshop on Deep Learning for Human Activity Recognition


Held in conjunction with IJCAI-24, 3rd – 9th August, 2024 in Jeju, Korea

Previous editions: 1st workshop | 2nd workshop | 3rd workshop

Introduction


Human activity recognition (HAR) can be used for a number of applications, such as healthcare services and smart home applications. Many sensors have been utilized for human activity recognition, including wearable sensors, smartphones, radio frequency (RF) sensors (WiFi, RFID), LED light sensors, and cameras. Owing to the rapid development of wireless sensor networks, a large amount of data has been collected for the recognition of human activities with different kinds of sensors. Conventional shallow learning algorithms, such as support vector machines and random forests, require representative features to be manually extracted from large and noisy sensory data. However, manual feature engineering requires expert knowledge and inevitably misses implicit features.

Recently, deep learning has achieved great success in many challenging research areas, such as image recognition and natural language processing. The key merit of deep learning is that it automatically learns representative features from massive data, which makes it a good candidate for human activity recognition. Some initial attempts can be found in the literature. However, many challenging research problems in terms of accuracy, device heterogeneity, environmental changes, etc. remain unsolved.
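To illustrate this feature-learning property, below is a minimal sketch of a 1D convolutional network in PyTorch that maps raw wearable-sensor windows directly to activity classes, with no hand-crafted features. The layer sizes, channel count, and window length are illustrative assumptions, not a reference implementation endorsed by the workshop.

import torch
import torch.nn as nn

class SimpleHARNet(nn.Module):
    """Minimal 1D-CNN that learns features from raw sensor windows.

    Input shape: (batch, channels, time), e.g. 6 IMU channels x 128 samples.
    All sizes are illustrative assumptions.
    """
    def __init__(self, in_channels: int = 6, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling replaces manual summary statistics
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).squeeze(-1)  # (batch, 64) learned feature vector
        return self.classifier(z)

# Example: a batch of 8 windows, 6 IMU channels, 128 time steps.
model = SimpleHARNet()
logits = model(torch.randn(8, 6, 128))
print(logits.shape)  # torch.Size([8, 6])

The point of the sketch is that the convolutional layers learn the representative features end-to-end from raw data, whereas a shallow pipeline would first compute hand-designed statistics (means, variances, spectral features) per window.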

This workshop aims to promote state-of-the-art approaches to deep learning for human activity recognition. The organizers invite researchers to participate and submit their research papers to the Deep Learning for Human Activity Recognition Workshop.

Important Dates


May 9, 2024 (extended from April 26, 2024): Submission deadline
June 4, 2024: Acceptance notification
August 3-9, 2024: Conference dates

As requested by authors, we have extended the submission deadline to May 9 to give authors more time. All deadlines use the same time zone as the main conference.

Topics


Potential topics include, but are not limited to:

Foundation models for HAR

Device-based HAR using deep learning

Device-free HAR using deep learning

Image-based HAR using deep learning

Light-sensor-based HAR using deep learning

Sensor fusion for HAR using deep learning

Fusion of shallow models with deep networks for HAR

Device heterogeneity in device-based HAR

Transfer learning for HAR

Federated learning for HAR

Reinforcement learning for HAR

Online learning for HAR

Self-supervised learning for HAR

Semi-supervised learning for HAR

Submission and Registration


Submission Format: Authors should follow the IJCAI paper preparation instructions, including the page limit (7 pages plus up to 2 extra pages for references). Reviews are double-blind.

At least one author of each accepted paper *must* attend the IJCAI venue in person, and submitting the same paper to multiple IJCAI workshops is forbidden.

Sign-up is required for submission. Submission Link.

Papers from this workshop have been published in a Springer book series: https://link.springer.com/book/10.1007/978-981-97-9003-6.

Planned Schedule


09:00--09:10
Opening Remarks
by organizers

09:10--10:10
Keynote Presentation: Human Micro-gestures: Data and Analysis
by Prof. Zhao Guoying (University of Oulu, Finland)

10:10--10:30
Oral 1: Real-Time Human Action Prediction via Pose Kinematics
by Niaz Ahmad; Saif Ullah; Jawad Khan; Youngmoon Lee

10:30--11:00
Tea Break

11:00--12:00
Keynote Presentation: The impact of foundation models on sensor-based Human Activity Recognition
by Prof. Paul Lukowicz, DFKI, Germany

12:00--12:20
Oral 2: Uncertainty Awareness for Unsupervised Domain Adaptation on Human Activity Recognition
by Weide Liu; Xiaoyang Zhong; Lu Wang; Jingwen Hou; Yuemei Luo; Jiebin Yan; Yuming Fang

12:20--12:40
Oral 3: Deep Interaction Feature Fusion for Robust Human Activity Recognition
by YongKyung Oh; Sungil Kim; Alex Bui

12:40--14:00
Lunch

14:00--14:20
Oral 4: COMPUTER: Unified Query Machine with Cross-modal Consistency for Human Activity Recognition
by Tuyen Tran; Thao Minh Le; Hung Tran; Truyen Tran

14:20--14:40
Oral 5: How effective are Self-Supervised models for Contact Identification in Videos (Online)
by Malitha Gunawardhana; Limalka Sadith; Liel David; Muhammad Haris; Danny Harari

14:40--15:00
Oral 6: A Wearable Multi-Modal Edge-Computing System for Real-Time Kitchen Activity Recognition
by Mengxi Liu; Sungho Suh; Juan Felipe Vargas; Bo Zhou; Agnes Grünerbl; Paul Lukowicz

15:00--15:10
Closing Remarks
by organizers

Committee


General Chairs

Zhenghua Chen

chen_zhenghua@i2r.a-star.edu.sg

Centre for Frontier AI Research, A*STAR, Singapore

Jianfei Yang

jianfei.yang@ntu.edu.sg

Nanyang Technological University, Singapore

Min Wu

wumin@i2r.a-star.edu.sg

Institute for Infocomm Research, A*STAR, Singapore

Program Committee

Vincent Zheng

Advanced Digital Sciences Center, Singapore

Sinno Pan

The Chinese University of Hong Kong

Keyu Wu

A*STAR, Singapore

Bing Li

University of New South Wales, Australia

Jinming Xu

Zhejiang University, China

Yuecong Xu

National University of Singapore

Han Zou

Microsoft, USA

Wei Cui

A*STAR, Singapore

Xiaoxuan Lu

University of Edinburgh, UK

Le Zhang

University of Electronic Science and Technology of China

Paul Lukowicz

DFKI, Germany

Bo Zhou

DFKI, Germany

Vitor Fortes Rey

DFKI, Germany

Michael Beigl

Karlsruhe Institute of Technology

Stephan Sigg

Aalto University
