
EmbraceNet for Activity: A Deep Multimodal Fusion Architecture for Activity Recognition

Human activity recognition using multiple sensors has been a challenging but promising task in recent decades. In this paper, we propose a deep multimodal fusion model for activity recognition based on the recently proposed feature fusion architecture named EmbraceNet.
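As a rough illustration of the fusion scheme referred to in the abstract, the sketch below assumes PyTorch; the class name EmbraceNetFusion, the layer sizes, and the uniform modality-selection probabilities are illustrative assumptions rather than the authors' exact implementation. The idea is that per-modality docking layers map each sensor's features to a common size, and an embracement step then stochastically selects, for every fused feature index, which modality supplies that value.

import torch
import torch.nn as nn

class EmbraceNetFusion(nn.Module):
    """Minimal EmbraceNet-style fusion sketch: per-modality docking layers
    followed by a stochastic embracement layer (illustrative, not the
    authors' exact implementation)."""

    def __init__(self, input_sizes, embrace_size):
        super().__init__()
        # Docking layers map each modality's feature vector to a common size.
        self.docking = nn.ModuleList(
            [nn.Linear(size, embrace_size) for size in input_sizes]
        )
        self.embrace_size = embrace_size

    def forward(self, modality_features):
        # modality_features: list of tensors, each of shape (batch, input_sizes[i])
        docked = torch.stack(
            [torch.relu(layer(x)) for layer, x in zip(self.docking, modality_features)],
            dim=1,
        )  # (batch, num_modalities, embrace_size)
        batch, num_modalities, _ = docked.shape
        # Embracement: for each fused feature index, sample which modality supplies it
        # (uniform selection probabilities assumed here for simplicity).
        probs = torch.full((batch, num_modalities), 1.0 / num_modalities, device=docked.device)
        choice = torch.multinomial(probs, self.embrace_size, replacement=True)  # (batch, embrace_size)
        mask = torch.zeros_like(docked).scatter_(1, choice.unsqueeze(1), 1.0)
        return (docked * mask).sum(dim=1)  # (batch, embrace_size)

# Illustrative usage with two sensor modalities (e.g., accelerometer and gyroscope features).
if __name__ == "__main__":
    fusion = EmbraceNetFusion(input_sizes=[64, 48], embrace_size=128)
    acc = torch.randn(8, 64)
    gyro = torch.randn(8, 48)
    fused = fusion([acc, gyro])
    print(fused.shape)  # torch.Size([8, 128])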

Data and Resources

This dataset currently has no data resources.

Cite this as

Jun-Ho Choi, Jong-Seok Lee (2024). Dataset: EmbraceNet for Activity: A Deep Multimodal Fusion Architecture for Activity Recognition. https://doi.org/10.57702/pb68s5pp

Private DOI: This DOI is not yet resolvable. It is available for use in manuscripts and will be published when the dataset is made public.

Additional Info

Created: December 16, 2024
Last update: December 16, 2024
Defined in: https://doi.org/10.1145/3341162.3344871
Author: Jun-Ho Choi
More authors: Jong-Seok Lee
Homepage: https://ieeexplore.ieee.org/document/8671111