Matches in ScholarlyData for { <https://w3id.org/scholarlydata/inproceedings/lrec2008/papers/472> ?p ?o. }
Showing items 1 to 18 of 18, with 100 items per page.
- 472 creator akira-ozaki.
- 472 creator chiyomi-miyajima.
- 472 creator katunobu-itou.
- 472 creator kazuya-takeda.
- 472 creator norihide-kitaoka.
- 472 creator sunao-hara.
- 472 creator takanori-nishino.
- 472 creator takashi-kusakawa.
- 472 type InProceedings.
- 472 label "In-car Speech Data Collection along with Various Multimodal Signals".
- 472 sameAs 472.
- 472 abstract "In this paper, a large-scale real-world speech database is introduced along with other multimedia driving data. We designed a data collection vehicle equipped with various sensors to synchronously record twelve-channel speech, three-channel video, driving behavior including gas and brake pedal pressures, steering angles, and vehicle velocities, physiological signals including driver heart rate, skin conductance, and emotion-based sweating on the palms and soles, etc. These multimodal data are collected while driving on city streets and expressways under four different driving task conditions including two kinds of monologues, human-human dialog, and human-machine dialog. We investigated the response timing of drivers against navigator utterances and found that most overlapped with the preceding utterance due to the task characteristics and the features of Japanese. When comparing utterance length, speaking rate, and the filler rate of driver utterances in human-human and human-machine dialogs, we found that drivers tended to use longer and faster utterances with more fillers to talk with humans than machines.".
- 472 hasAuthorList authorList.
- 472 hasTopic Linguistics.
- 472 isPartOf proceedings.
- 472 keyword "Multimedia annotation and processing".
- 472 keyword "Speech resource/database".
- 472 title "In-car Speech Data Collection along with Various Multimodal Signals".
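The triple pattern at the top of this page corresponds to a plain SPARQL SELECT over the paper's URI. A minimal sketch of constructing that query programmatically — the helper function name is ours, and any endpoint you send it to (ScholarlyData exposes one, but its URL is not stated on this page) is an assumption:

```python
# Sketch: build the SPARQL query matching the pattern
#   { <subject> ?p ?o . }
# shown at the top of this result page.
SUBJECT = "https://w3id.org/scholarlydata/inproceedings/lrec2008/papers/472"

def build_query(subject: str) -> str:
    """Return a SELECT query listing all predicate/object pairs
    for the given subject URI (hypothetical helper)."""
    return f"SELECT ?p ?o WHERE {{ <{subject}> ?p ?o . }}"

query = build_query(SUBJECT)
print(query)
```

Submitting this query to the dataset's SPARQL endpoint would return the 18 predicate/object pairs listed above (creators, type, label, abstract, keywords, and so on).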