2019.09.22 Interaction papers


09-18-2019

A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities
by Deepali Aneja et al

09-19-2019

Slices of Attention in Asynchronous Video Job Interviews
by Léo Hemamou et al

09-19-2019

Human-In-The-Loop Learning of Qualitative Preference Models
by Joseph Allen et al

09-18-2019

Conversational AI: Open Domain Question Answering and Commonsense Reasoning
by Kinjal Basu

09-18-2019

Real-time Recognition of Smartphone User Behavior Based on Prophet Algorithms
by Chunmin Mi et al

09-18-2019

Extracting Super-resolution Details Directly from a Diffraction-Blurred Image or Part of Its Frequency Spectrum
by Edward Y. Sheffield

09-17-2019

Multimodal Continuation-style Architectures for Human-Robot Interaction
by Nikhil Krishnaswamy et al

09-19-2019

Open Challenges of Blind People using Smartphones
by André Rodrigues et al

09-19-2019

Towards humane digitization: a wellbeing-driven process of personas creation
by Irawan Nurhas et al

09-19-2019

Learning to Conceal: A Deep Learning Based Method for Preserving Privacy and Avoiding Prejudice
by Moshe Hanukoglu et al

09-18-2019

A Human-Centered Data-Driven Planner-Actor-Critic Architecture via Logic Programming
by Daoming Lyu et al

09-18-2019

Emotion Filtering at the Edge
by Ranya Aloufi et al

09-17-2019

RTTD-ID: Tracked Captions with Multiple Speakers for Deaf Students
by Raja Kushalnagar et al

09-20-2019

An Experimental Comparison of Map-like Visualisations and Treemaps
by Patrick Cheong-Iao Pang et al

09-20-2019

Quantifying the Impact of Cognitive Biases in Question-Answering Systems
by Keith Burghardt et al

Craig Smith