
2023/5/1

Detection of dyadic motor synchrony based on automated facial action coding of simultaneous video recordings

Details of the talk are as follows:
===========================================================================================
Time: 2023-05-05 (Friday), 13:10-15:00
Venue: Lecture hall of the Department of Psychology, 2F, North Building, College of Social Sciences
Title: Detection of dyadic motor synchrony based on automated facial action coding of simultaneous video recordings
Speaker: Dr. 許鈞庭 (Research Scientist, Psychological Process Research Team, Guardian Robot Project, RIKEN, Japan)
Language: English (questions may be asked in either Mandarin or English)

Abstract:
Real-life and second-person neuroscience aim to improve ecological validity by using bidirectional interaction designs and symmetric measurements to reveal dyadic physiological or neural couplings. A live image relay system was employed to deliver models' real-time performances of positive (smiling) and negative (frowning) dynamic facial expressions, or prerecorded videos of these, to participants. A previous analysis using facial electromyograms (fEMG) of the zygomaticus major (ZM) and corrugator supercilii (CS) muscles revealed enhanced spontaneous facial mimicry, enhanced right mirror neuron system activity, and enhanced functional connectivity within it when participants observed live facial expressions. Several studies have used automated Facial Action Coding System (FACS) tools (e.g., OpenFace) to detect spontaneous facial mimicry, but their validity and reliability for this purpose, compared with fEMG, have not been evaluated. In a basic sanity check for detecting facial movements and mimicry, FaceReader 9 (Noldus Information Technology B.V.), OpenFace 2.0, and Py-Feat 0.4.0 showed approximately chance-level sensitivity, specificity, and predictive values, with FaceReader 9 (East Asian model, bilateral) performing slightly better. In the present analysis, I used FaceReader 9 to detect dyadic facial movement coupling from the facial video recordings. Participants' action unit (PAU) 12 (lip corner puller) responses correlated significantly with their ZM responses, but their PAU4 (brow lowerer) responses did not correlate with their CS responses. PAU12 responses reproduced the interaction effect between the emotion and presentation conditions, showing that live conditions enhanced facial mimicry. Cross-correlation between model AU (MAU) 12 and PAU12 amplitudes, at up to 27 lags in steps of 1/30 s (30 fps), showed that live performances elicited faster and stronger facial mimicry. Dynamic time warping between paired PAU12 and MAU12 time series showed that the AU temporal patterns were significantly more similar for live performances. Multilevel vector autoregressive modeling revealed a negative-positive-negative pattern of temporal effects during spontaneous facial mimicry, which progressed faster when participants viewed live performances rather than prerecorded videos. The results indicate that automated FACS coding of high-quality facial video recordings is not equivalent to fEMG for mimicry detection; when appropriately analyzed, however, it is useful for detecting dyadic synchrony.
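
For context on the lag-based analyses mentioned in the abstract, below is a minimal Python sketch, using synthetic data, of how a lagged correlation over 0-27 frame lags at 30 fps and a basic dynamic time warping distance might be computed between a model's and a participant's AU12 time series. The function names (lagged_correlations, dtw_distance), the synthetic traces, and the injected delay are illustrative assumptions only; they do not reproduce the speaker's actual pipeline, which used FaceReader 9 output and multilevel vector autoregressive modeling.

import numpy as np

FPS = 30       # frame rate stated in the abstract (1/30-s steps)
MAX_LAG = 27   # cross-correlation examined up to 27 frame lags

def lagged_correlations(model_au, participant_au, max_lag=MAX_LAG):
    """Pearson correlation between the model's AU amplitude and the
    participant's AU amplitude delayed by 0..max_lag frames
    (mimicry is expected to follow the model's expression)."""
    r = []
    for lag in range(max_lag + 1):
        m = model_au[: len(model_au) - lag]
        p = participant_au[lag : lag + len(m)]
        r.append(np.corrcoef(m, p)[0, 1])
    return np.array(r)

def dtw_distance(x, y):
    """Classic dynamic time warping distance between two 1-D series,
    using the absolute difference as the local cost."""
    nx, ny = len(x), len(y)
    D = np.full((nx + 1, ny + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, nx + 1):
        for j in range(1, ny + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[nx, ny]

# Synthetic AU12 traces (hypothetical): the participant echoes the
# model's smiling bouts with a short delay, scaled down and with noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FPS)                                # 10 s at 30 fps
model_au12 = np.clip(np.sin(2 * np.pi * 0.2 * t), 0, None)   # smiling bouts
delay = 8                                                    # ~0.27 s, illustrative only
participant_au12 = 0.8 * np.roll(model_au12, delay) + rng.normal(0, 0.05, t.size)

r_by_lag = lagged_correlations(model_au12, participant_au12)
peak = int(np.argmax(r_by_lag))
print(f"peak correlation r = {r_by_lag[peak]:.2f} at lag {peak} frames ({peak / FPS:.2f} s)")
print(f"DTW distance: {dtw_distance(model_au12, participant_au12):.2f}")

In this synthetic example the peak lagged correlation falls near the injected delay; in the same spirit, a faster mimicry response in the live condition would surface as a correlation peak at a shorter lag, and a smaller DTW distance would indicate more similar AU temporal patterns.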
 
Speaker bio / CV:
Please see the attached file.
 
Personal website:
 
References:
Please see the attached file.
===========================================================================================
Please arrive on time. Thank you.