Week Ending 12.20.2020
RESEARCH WATCH: 12.20.2020
This week was active for "Computer Science", with 1,191 new papers.
The paper discussed most in the news over the past week was by a team at Oxford University: "Foundations for Near-Term Quantum Natural Language Processing" by Bob Coecke et al (Dec 2020), which was referenced 55 times, including in the article Moor Insights & Strategy Weekly Update Ending in December 11, 2020 in Moor Insights & Strategy. The paper got social media traction with 29 shares. On Twitter, @coecke posted "(1/2) Here is the 1st of the two QNLP arXiv papers we just publicised, mentioned in the Quantum Daily. It's focus is background and conceptual underpinning".
Leading researcher Pieter Abbeel (University of California, Berkeley) came out with "A Framework for Efficient Robotic Manipulation". On Twitter, @DataScienceNIG tweeted "Welcome, FERM from researchers a framework that can train a robotic arm on 6 grasping tasks in less than 1 hour given only 10 demonstrations utilizing data augmentation, unsupervised and reinforcement learning for sample-efficient training".
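One ingredient named in the tweet, data augmentation, is easy to sketch: random crops applied to image observations before they reach the RL encoder, a standard trick for sample-efficient pixel-based control. The shapes below are illustrative, and FERM's actual pipeline also includes contrastive pre-training and demonstrations.

```python
# Random-crop augmentation for image-based RL, as a hedged sketch.
import numpy as np

def random_crop(obs: np.ndarray, out: int = 84) -> np.ndarray:
    """obs: (C, H, W) observation; returns a random (C, out, out) crop."""
    _, h, w = obs.shape
    top = np.random.randint(0, h - out + 1)
    left = np.random.randint(0, w - out + 1)
    return obs[:, top:top + out, left:left + out]

batch = np.random.rand(8, 3, 100, 100)             # replay-buffer minibatch
augmented = np.stack([random_crop(o) for o in batch])
print(augmented.shape)                             # (8, 3, 84, 84)
```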
The paper shared the most on social media this week is by a team at Google: "Extracting Training Data from Large Language Models" by Nicholas Carlini et al (Dec 2020) with 681 shares. The researchers demonstrate that an adversary can perform a training data extraction attack, recovering individual training examples simply by querying the language model. @shortstein (Thomas Steinke) tweeted "TL;DR: Snippets of the (public) training data can be extracted from GPT-2. 😮 This is an excellent advertisement for differential privacy research if we want to train on private data. 😀 Blogpost: Paper".
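To make the attack concrete, here is a minimal sketch of the generate-then-rank recipe, assuming the public GPT-2 weights via Hugging Face transformers. The paper's full attack adds stronger membership signals (e.g., perplexity ratios against zlib compression or a smaller model); the sample count and decoding settings below are illustrative.

```python
# Hedged sketch of a training data extraction attack on a public LM:
# sample unconditionally, then flag the lowest-perplexity generations,
# since memorized training snippets tend to score unusually well.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    # exp(mean token negative log-likelihood) under the model
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

start = torch.tensor([[tokenizer.bos_token_id]])
samples = [
    tokenizer.decode(
        model.generate(start, do_sample=True, top_k=40, max_length=64,
                       pad_token_id=tokenizer.eos_token_id)[0],
        skip_special_tokens=True)
    for _ in range(20)  # the paper samples at a vastly larger scale
]
suspects = sorted(samples, key=perplexity)[:5]  # candidate memorized text
```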
This week was very active for "Computer Science - Artificial Intelligence", with 196 new papers.
The paper discussed most in the news over the past week was by a team at Stanford University: "Design Space for Graph Neural Networks" by Jiaxuan You et al (Nov 2020), which was referenced 3 times, including in the article Machine Learning on Knowledge Graphs @ NeurIPS 2020 in Medium.com. The paper got social media traction with 103 shares. The researchers define and systematically study the architectural design space for GNNs, which consists of 315,000 different designs over 32 different predictive tasks. On Twitter, @youjiaxuan observed "We are excited to release #GraphGym, a platform for designing and evaluating #GraphNeuralNetworks. It provides a modularized pipeline, a system for launching thousands of experiments, and more! Code: Paper: #NeurIPS2020 Spotlight".
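To make the notion of a "design space" concrete, a toy sketch: each GNN design is one point in a Cartesian product of architectural choices, and the paper's 315,000 designs come from a much larger grid than this one. The dimension names below are illustrative stand-ins, not GraphGym's actual configuration schema.

```python
# Toy illustration of a GNN design space as a Cartesian product of choices.
from itertools import product

design_space = {
    "num_layers": [2, 4, 6, 8],
    "aggregation": ["mean", "max", "sum"],
    "activation": ["relu", "prelu", "swish"],
    "batch_norm": [True, False],
    "layer_connectivity": ["stack", "skip-sum", "skip-cat"],
}

designs = [dict(zip(design_space, combo))
           for combo in product(*design_space.values())]
print(len(designs))  # 4 * 3 * 3 * 2 * 3 = 216 designs in this toy space
```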
Leading researcher Pieter Abbeel (University of California, Berkeley) came out with "A Framework for Efficient Robotic Manipulation". On Twitter, @DataScienceNIG tweeted "Welcome, FERM from researchers a framework that can train a robotic arm on 6 grasping tasks in less than 1 hour given only 10 demonstrations utilizing data augmentation, unsupervised and reinforcement learning for sample-efficient training".
The paper shared the most on social media this week is by a team at DeepMind: "Object-based attention for spatio-temporal reasoning: Outperforming neuro-symbolic models with flexible distributed architectures" by David Ding et al (Dec 2020) with 268 shares. @DeepMind (DeepMind) tweeted "Can neural networks learn to perform explanatory & counterfactual reasoning? Researchers find that an object-centric transformer substantially outperforms leading neuro-symbolic models on two reasoning tasks thought to be challenging for deep neural nets".
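The phrase "object-centric transformer" can be sketched generically: run self-attention over a set of per-object embeddings rather than over a pixel grid, so every object can attend to every other. This is an illustration of the idea only, not the paper's actual architecture; the slot count and dimensions are made up.

```python
# Generic object-based attention: a transformer over object slots.
import torch
import torch.nn as nn

num_objects, d_model = 8, 64
objects = torch.randn(1, num_objects, d_model)  # (batch, objects, features)

layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
relational = encoder(objects)  # each object slot attends to all the others
print(relational.shape)        # torch.Size([1, 8, 64])
```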
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 293 new papers.
The paper discussed most in the news over the past week was "img2pose: Face Alignment and Detection via 6DoF, Face Pose Estimation" by Vitor Albiero et al (Dec 2020), which was referenced 1 time, including in the article Facebook AI & University of Notre Dame Propose Multi-Face Pose Estimation Without Face Detection in SyncedReview.com. The paper got social media traction with 39 shares. On Twitter, @THassner observed "No more face detection. No more facial landmark detection. Read about the future of digital face processing in our latest work: Grateful to working with such a marvelous team: Vitor Albiero, Xingyu Chen, Xi Yin, and Guan Pang. #computervision".
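The geometry behind the tweet is worth a sketch: once a 6DoF pose (rotation plus translation) of a reference 3D face model is estimated, projecting that model into the image yields landmarks and a bounding box without running a separate face detector. The 3D points, pose, and camera intrinsics below are made-up placeholders, not img2pose's actual model.

```python
# Projecting a reference 3D face through an estimated 6DoF pose.
import numpy as np
import cv2

model_pts = np.array([[0, 0, 0], [0, -63, -12], [-43, 32, -26],
                      [43, 32, -26], [-28, -28, -24], [28, -28, -24]],
                     dtype=np.float64)              # toy 3D landmarks (mm)
rvec = np.array([0.1, 0.4, 0.0])                    # rotation (Rodrigues vector)
tvec = np.array([0.0, 0.0, 600.0])                  # translation (mm)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)

pts2d, _ = cv2.projectPoints(model_pts, rvec, tvec, K, None)
x, y = pts2d[:, 0, 0], pts2d[:, 0, 1]
print((x.min(), y.min(), x.max(), y.max()))         # face box "for free"
```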
Leading researcher Luc Van Gool (Computer Vision Laboratory) published "Scaling Semantic Segmentation Beyond 1K Classes on a Single GPU". The investigators propose a novel training methodology to train and scale existing semantic segmentation models to a large number of semantic classes without increasing the memory overhead.
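A quick worked example shows why class count is the bottleneck: the dense per-pixel logits tensor alone grows linearly with the number of classes, before any gradients or optimizer state are counted. The resolution and dtype below are illustrative.

```python
# Memory of the dense logits tensor for one image, float32.
bytes_per_float = 4
h, w = 512, 512
for num_classes in (21, 1000, 10000):
    logits_mib = h * w * num_classes * bytes_per_float / 2**20
    print(f"{num_classes:>6} classes -> {logits_mib:8.1f} MiB per image")
# 21 -> 21.0 MiB, 1000 -> 1000.0 MiB, 10000 -> 10000.0 MiB
```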
The paper shared the most on social media this week is by a team at DeepMind: "Object-based attention for spatio-temporal reasoning: Outperforming neuro-symbolic models with flexible distributed architectures" by David Ding et al (Dec 2020) with 268 shares.
This week was active for "Computer Science - Computers and Society", with 32 new papers.
This week was active for "Computer Science - Human-Computer Interaction", with 28 new papers.
The paper discussed most in the news over the past week was "Encounters with Visual Misinformation and Labels Across Platforms: An Interview and Diary Study to Inform Ecosystem Approaches to Misinformation Interventions" by Emily Saltz et al (Nov 2020), which was referenced 1 time, including in the article Facebook Takes More Proactive Steps to Alert Users Who've Engaged with Misinformation in Social Media Today. The paper got social media traction with 26 shares. On Twitter, @CLeibowicz posted ""I thought the manipulated media label meant to tell me the media is manipulating me." ^my favorite user quote from our study. Sums up how much you can learn from qualitative research. We need more of it. See more here: Thanks".
This week was extremely active for "Computer Science - Learning", with 453 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Underspecification Presents Challenges for Credibility in Modern Machine Learning" by Alexander D'Amour et al (Nov 2020), which was referenced 11 times, including in the article Can Healthcare AI Tools Safely Reduce Rates of Healthcare-Associated Infections? in Hospital EMR and EHR. The paper author, Alex D’Amour, was quoted saying "We are asking more of machine-learning models than we are able to guarantee with our current approach". The paper also got the most social media traction with 1,603 shares. A user, @popular_ML, tweeted "The most popular ArXiv tweet in the last 24h", while @julius_adebayo observed "This week's new must read 30 pager. Domain shift and spurious training signals are major open problems in ML".
Leading researcher Pieter Abbeel (University of California, Berkeley) came out with "A Framework for Efficient Robotic Manipulation". On Twitter, @DataScienceNIG tweeted "Welcome, FERM from researchers a framework that can train a robotic arm on 6 grasping tasks in less than 1 hour given only 10 demonstrations utilizing data augmentation, unsupervised and reinforcement learning for sample-efficient training".
The paper shared the most on social media this week is by a team at Google: "Extracting Training Data from Large Language Models" by Nicholas Carlini et al (Dec 2020) with 681 shares.
This week was active for "Computer Science - Multiagent Systems", with 20 new papers.
Over the past week, 26 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was "Hardware Beyond Backpropagation: a Photonic Co-Processor for Direct Feedback Alignment" by Julien Launay et al (Dec 2020), which was referenced 1 time, including in the article At NeurIPS 2020, researchers proposed faster, more efficient alternatives to backpropagation in Venturebeat. The paper got social media traction with 10 shares. The researchers argue that alternative training methods can mitigate the scaling limitations of backpropagation, and can inform the design of extreme-scale training hardware. A Twitter user, @Underfox3, commented "In this paper researchers have proposed the first scalable, beyond backpropagation photonic co-processor based on DFA, able to compute random projections with trillions of parameters. #photonics #DeepLearning".
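Direct Feedback Alignment (DFA) itself is compact enough to sketch, which clarifies what the photonic co-processor accelerates: the fixed random projections that replace backpropagation's transposed weight matrices. The toy task, layer sizes, and learning rate below are illustrative.

```python
# Minimal NumPy sketch of Direct Feedback Alignment on a toy problem.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h, d_out, n = 32, 64, 10, 256
W1 = rng.normal(0, 0.1, (d_in, d_h))
W2 = rng.normal(0, 0.1, (d_h, d_out))
B1 = rng.normal(0, 0.1, (d_out, d_h))         # fixed random feedback matrix

X = rng.normal(size=(n, d_in))
Y = np.eye(d_out)[rng.integers(0, d_out, n)]  # random one-hot targets

lr = 0.05
for step in range(200):
    H = np.tanh(X @ W1)                       # hidden activations
    P = H @ W2                                # linear output
    E = P - Y                                 # output error (MSE gradient)
    dH = (E @ B1) * (1 - H**2)                # DFA: error via B1, not W2.T
    W2 -= lr * H.T @ E / n
    W1 -= lr * X.T @ dH / n
print(float((E**2).mean()))                   # loss falls as alignment develops
```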
This week was very active for "Computer Science - Robotics", with 66 new papers.
The paper discussed most in the news over the past week was by a team at Zhejiang University: "EGO-Swarm: A Fully Autonomous and Decentralized Quadrotor Swarm System in Cluttered Environments" by Xin Zhou et al (Nov 2020), which was referenced 4 times, including in the article Watch a swarm of drones fly through heavy forest—while staying in formation in Science Magazine. The paper was shared 3 times on social media. The researchers present a decentralized and asynchronous systematic solution for multi-robot autonomous navigation in unknown, obstacle-rich scenes using only onboard resources.
Leading researcher Pieter Abbeel (University of California, Berkeley) published "A Framework for Efficient Robotic Manipulation". On Twitter, @DataScienceNIG tweeted "Welcome, FERM from researchers a framework that can train a robotic arm on 6 grasping tasks in less than 1 hour given only 10 demonstrations utilizing data augmentation, unsupervised and reinforcement learning for sample-efficient training".
The paper shared the most on social media this week is by a team at UC Berkeley: "ViNG: Learning Open-World Navigation with Visual Goals" by Dhruv Shah et al (Dec 2020) with 51 shares.