Week Ending 1.3.2021
RESEARCH WATCH: 1.3.2021
Over the past week, 692 new papers were published in "Computer Science".
The paper discussed most in the news over the past week was by a team at University of Cambridge: "Hey Alexa what did I just type? Decoding smartphone sounds with a voice assistant" by Almos Zarandy et al (Dec 2020), which was referenced 8 times, including in the article Voice Assistants Can Store And Leak Texts Typed On Smartphones In Proximity in Latest Hacking News. The paper got social media traction with 31 shares. The researchers show that privacy threats go beyond spoken conversations and include sensitive data typed on nearby smartphones. A user, @TwitchiH, tweeted "...can extract PIN codes and text messages from recordings collected by a voice assistant located up to half a meter away. This shows that remote keyboard-inference attacks are not limited to physical keyboards but extend to virtual keyboards too."
Leading researcher Kyunghyun Cho (New York University) came out with "Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization".
The paper shared the most on social media this week is by a team at Adobe: "Out of Order: How important is the sequential order of words in a sentence in Natural Language Understanding tasks?" by Thang M. Pham et al (Dec 2020) with 280 shares. @KiddoThe2B (Ilias Chalkidis) tweeted "So instead of burning 🌳 and 💸 in order to hack NLU challenges and do PR, maybe it's a better idea to spend some resources to curate datasets?".
This week was active for "Computer Science - Artificial Intelligence", with 102 new papers.
The paper discussed most in the news over the past week was by a team at Stanford University: "Design Space for Graph Neural Networks" by Jiaxuan You et al (Nov 2020), which was referenced 4 times, including in the article Interesting papers I read from NeurIPS2020 in Towards Data Science. The paper got social media traction with 104 shares. The researchers define and systematically study the architectural design space for GNNs which consists of 315,000 different designs over 32 different predictive tasks. On Twitter, @youjiaxuan posted "We are excited to release #GraphGym, a platform for designing and evaluating #GraphNeuralNetworks. It provides a modularized pipeline, a system for launching thousands of experiments, and more! Code: Paper: #NeurIPS2020 Spotlight".
Leading researcher Sergey Levine (University of California, Berkeley) came out with "Model-Based Visual Planning with Self-Supervised Functional Distances".
The paper shared the most on social media this week is by a team at Adobe: "Out of Order: How important is the sequential order of words in a sentence in Natural Language Understanding tasks?" by Thang M. Pham et al (Dec 2020) with 280 shares.
Over the past week, 142 new papers were published in "Computer Science - Computer Vision and Pattern Recognition".
The paper discussed most in the news over the past week was by a team at City University of Hong Kong: "Is a Green Screen Really Necessary for Real-Time Portrait Matting?" by Zhanghan Ke et al (Nov 2020), which was referenced 3 times, including in the article 2020: A Year Full of Amazing AI Papers — A Review in KDNuggets. The paper got social media traction with 118 shares. A user, @arxiv_pop, tweeted "Top-ranked submission for 2020/11/24 in CV (Computer Vision and Pattern Recognition): Is a Green Screen Really Necessary for Real-Time Human Matting? 6 Tweets 37 Retweets 262 Favorites", while @AkiraTOSEI said "A study of real-time human matting. The strategy is to train the model in a supervised manner, then fine-tune it with self-supervised training on unlabeled data to enforce consistency across three tasks: boundary prediction for low- and high-resolution masks".
Leading researcher Sergey Levine (University of California, Berkeley) published "Model-Based Visual Planning with Self-Supervised Functional Distances".
The paper shared the most on social media this week is by a team at Southeast University: "TransPose: Towards Explainable Human Pose Estimation by Transformer" by Sen Yang et al (Dec 2020) with 86 shares. @popular_ML (Popular ML resources) tweeted "The most popular ArXiv tweet in the last 24h".
Over the past week, 13 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was by a team at Carnegie Mellon University: "Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy" by Amanda Coston et al (Nov 2020), which was referenced 2 times, including in the article Researchers: Detailed Mobility Data Can Help Target COVID-19 Hot Spots in Civil Beat.
The paper shared the most on social media this week is "Fairness in Machine Learning" by Luca Oneto et al (Dec 2020) with 69 shares. The authors discuss limitations in current reasoning about fairness and in the methods that address it, and describe their own work toward overcoming those limitations.
Over the past week, 17 new papers were published in "Computer Science - Human-Computer Interaction".
The paper shared the most on social media this week is by a team at Warsaw University of Technology: "dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python" by Hubert Baniecki et al (Dec 2020) with 121 shares. @psteinb_ (Peter Steinbach) tweeted "I'll add this to the toolbox. Great to see it get the well-deserved praise and attention! His book is my go-to resource for explainable ML".
This week was active for "Computer Science - Learning", with 206 new papers.
The paper discussed most in the news over the past week was by a team at University of Cambridge: "Hey Alexa what did I just type? Decoding smartphone sounds with a voice assistant" by Almos Zarandy et al (Dec 2020), which was referenced 8 times.
Leading researcher Kyunghyun Cho (New York University) came out with "Catastrophic Fisher Explosion: Early Phase Fisher Matrix Impacts Generalization".
The paper shared the most on social media this week is by a team at Adobe: "Out of Order: How important is the sequential order of words in a sentence in Natural Language Understanding tasks?" by Thang M. Pham et al (Dec 2020) with 280 shares.
Over the past week, 10 new papers were published in "Computer Science - Multiagent Systems".
Over the past week, 4 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was by a team at University of Oxford: "Generalization bounds for deep learning" by Guillermo Valle-Pérez et al (Dec 2020), which was referenced once, in the article Deep Neural Networks are biased, at initialisation, towards simple functions in Towards Data Science. The paper got social media traction with 63 shares. The researchers introduce desiderata for techniques that predict generalization errors for deep learning models in supervised learning. A Twitter user, @guillefix, observed "I’m super excited to release this! What do we want from a generalization theory of deep learning? We propose 7 desiderata (Ds), review how existing bounds do at them, and show that a marginal-likelihood PAC-Bayes bound does better at most Ds".
Over the past week, 37 new papers were published in "Computer Science - Robotics".
The paper discussed most in the news over the past week was "Evaluating Agents without Rewards" by Brendon Matusch et al (Dec 2020), which was referenced once, in the article Are RL Agents More Humanlike When Not Seeking Rewards? New Research from Vector Institute, University of Toronto & Google Brain in SyncedReview.com. The paper got social media traction with 59 shares. On Twitter, @CShorten30 commented "Evaluating Agents without Rewards 🍪 "To accelerate the development of intrinsic objectives, we retrospectively compute potential objectives on pre-collected datasets of agent behavior, rather than optimizing them online"".
Leading researcher Sergey Levine (University of California, Berkeley) came out with "Model-Based Visual Planning with Self-Supervised Functional Distances".