Week Ending 12.13.2020
RESEARCH WATCH: 12.13.2020
Over the past week, 1,028 new papers were published in "Computer Science".
The paper discussed most in the news over the past week was by a team at Oxford University: "Foundations for Near-Term Quantum Natural Language Processing" by Bob Coecke et al (Dec 2020), which was referenced 52 times, including in the article Cambridge Quantum Computing Posts Foundational... in ADVFN Deutschland. The paper got social media traction with 27 shares. On Twitter, @coecke commented "(1/2) Here is the 1st of the two QNLP arXiv papers we just publicised, mentioned in the Quantum Daily. It's focus is background and conceptual underpinning".
Leading researcher Yoshua Bengio (Université de Montréal) published "Machine Learning for Glacier Monitoring in the Hindu Kush Himalaya". The investigators present a machine-learning-based approach to support ecological monitoring, with a focus on glaciers.
The paper shared the most on social media this week is by a team at University of Washington: "Brain Co-Processors: Using AI to Restore and Augment Brain Function" by Rajesh P. N. Rao (Dec 2020) with 236 shares. The researchers introduce brain co-processors, devices that combine decoding and encoding in a unified framework using artificial intelligence (AI) to supplement or augment brain function. @ctr4neurotech (Center for Neurotechnology) tweeted "A new article from CNT Co-Director Rajesh Rao describes brain-coprocessors, which use to restore or augment #brain function. The piece also covers applications and ethical implications of this #neurotechnology".
The most influential Twitter user discussing papers is Daniel Roy who shared "Every Model Learned by Gradient Descent Is Approximately a Kernel Machine" by Pedro Domingos (Nov 2020) and said: "well now we have precise statements to stare at".
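The "precise statements" Roy refers to can be paraphrased as follows (our summary of the paper's headline result; see the paper for the exact theorem and its conditions):

```latex
% A model $f_w$ trained by gradient flow on a loss $\sum_i L(y_i^*, f_w(x_i))$
% ends up (approximately) a kernel machine:
\[
  f(x) = \sum_i a_i \, K^p(x, x_i) + b,
\]
% where $K^p$ is the ``path kernel'', the tangent kernel integrated along the
% optimization path $c(t)$ in weight space:
\[
  K^p(x, x') = \int_{c(t)} \nabla_w f_w(x) \cdot \nabla_w f_w(x') \, dt,
\]
% $a_i$ averages the loss derivative at example $i$ along the path, and
% $b = f_{w_0}(x)$ is the initial model's output.
```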
This week was very active for "Computer Science - Artificial Intelligence", with 193 new papers.
The paper discussed most in the news over the past week was by a team at Google: "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" by Alexey Dosovitskiy et al (Oct 2020), which was referenced 8 times, including in the article The State of AI in 2020 in Towards Data Science. The paper got social media traction with 190 shares. A Twitter user, @dribnet, observed "Excited about vision transformers and fascinated to uncover whether they seem to see the world the same way as CNNs. So far it appears my CNN generated artwork generalizes well to this architecture with good results on the 4 publicly released ViT models".
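The title captures the core idea: the image is cut into fixed-size patches, each flattened into a vector and treated as a token, so a standard Transformer can attend over them like words. A minimal sketch of that patch tokenization (NumPy, illustrative dimensions only, not the authors' code):

```python
import numpy as np

def image_to_patch_tokens(img, patch=16):
    """Split an H x W x C image into flattened (patch x patch x C) tokens,
    the "16x16 words" a Vision Transformer attends over."""
    H, W, C = img.shape
    assert H % patch == 0 and W % patch == 0, "image must tile evenly"
    # (H/p, p, W/p, p, C) -> (H/p, W/p, p, p, C) -> (N, p*p*C)
    x = img.reshape(H // patch, patch, W // patch, patch, C)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, patch * patch * C)

tokens = image_to_patch_tokens(np.zeros((224, 224, 3)), patch=16)
print(tokens.shape)  # (196, 768): 14x14 patches, each a 768-dim "word"
```

In the full model each token is then linearly projected and given a positional embedding before entering the Transformer encoder.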
Leading researcher Pieter Abbeel (University of California, Berkeley) published "Parallel Training of Deep Networks with Local Updates". @HarshSikka tweeted "This seems like a very promising result! The oft cited reason people don't look at local update rules seriously (from when I last looked into them ~1.5 yrs ago) was that they don't approach the efficiency of backprop. Very cool to see they work in high compute scenarios!".
The paper shared the most on social media this week is by a team at University of Washington: "Brain Co-Processors: Using AI to Restore and Augment Brain Function" by Rajesh P. N. Rao (Dec 2020)
The most influential Twitter user discussing papers is Cliff Pickover who shared "On the Smale Conjecture for Diff$(S^4)$" by Selman Akbulut (Aug 2020) and said: "Imagine the joy of being the world's foremost expert on "K-dabbing." Source".
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 272 new papers.
The paper discussed most in the news over the past week was by a team at Google: "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" by Alexey Dosovitskiy et al (Oct 2020)
Leading researcher Yoshua Bengio (Université de Montréal) published "Machine Learning for Glacier Monitoring in the Hindu Kush Himalaya"
The paper shared the most on social media this week is "Robust Consistent Video Depth Estimation" by Johannes Kopf et al (Dec 2020) with 108 shares. @XuanLuo14 (Xuan Luo) tweeted "Wow! Thrilled to see these challenging scenes being solved!".
The most influential Twitter user discussing papers is Daniel Roy who shared "Every Model Learned by Gradient Descent Is Approximately a Kernel Machine" by Pedro Domingos (Nov 2020)
Over the past week, 26 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was "The De-democratization of AI: Deep Learning and the Compute Divide in Artificial Intelligence Research" by Nur Ahmed et al (Oct 2020), which was referenced 6 times, including in the article Google showed us the danger of letting corporations lead AI research in Quartz. The paper also got the most social media traction with 288 shares. A Twitter user, @hyounpark, commented "Great thread by on the need for bias awareness in AI research. The more we learn about #AI, the more we realize that it reifies status quo hierarchies without active governance", while @joftius said "This paper is about a really important problem. One cause it mentions that I would emphasize more: the role of proprietary data. Most of the value of "AI" comes from the users and content moderators on massive platforms that generate/curate data".
The paper shared the most on social media this week is by a team at Mathematical Institute: "Algorithmic risk assessments can alter human decision-making processes in high-stakes government contexts" by Ben Green et al (Dec 2020) with 54 shares. @beausievers (Beau Sievers) tweeted "The rationalist dream is that computer models can help humans transcend their biases. But this is harder than it looks, as the interpretation and use of models is itself undermined by familiar human frailties".
The most influential Twitter user discussing papers is Daniel Roy who shared "Every Model Learned by Gradient Descent Is Approximately a Kernel Machine" by Pedro Domingos (Nov 2020)
Over the past week, 21 new papers were published in "Computer Science - Human-Computer Interaction".
The paper discussed most in the news over the past week was by a team at Harvard University: "Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring" by Tom Sühr et al (Dec 2020), which was referenced 4 times, including in the article Understanding Algorithmic Biases & Its Impact On Online Hiring in Analytics India Magazine. The paper was shared 2 times on social media.
The paper shared the most on social media this week is by a team at Mathematical Institute: "Algorithmic risk assessments can alter human decision-making processes in high-stakes government contexts" by Ben Green et al (Dec 2020)
The most influential Twitter user discussing papers is Cliff Pickover who shared "On the Smale Conjecture for Diff$(S^4)$" by Selman Akbulut (Aug 2020)
This week was very active for "Computer Science - Learning", with 386 new papers.
The paper discussed most in the news over the past week was by a team at Google: "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale" by Alexey Dosovitskiy et al (Oct 2020)
Leading researcher Aaron Courville (Université de Montréal) published "Convex Potential Flows: Universal Probability Distributions with Optimal Transport and Convex Optimization".
The paper shared the most on social media this week is by a team at DeepMind: "Imitating Interactive Intelligence" by Josh Abramson et al (Dec 2020) with 211 shares. The researchers study how to design artificial agents that can interact naturally with humans using the simplification of a virtual environment. @DeepMind (DeepMind) tweeted "Our team studied how to build agents that interact with humans using natural language and physical actions in a simulated environment called the Playroom - based on a large dataset of human-human interactions. (1/3)".
The most influential Twitter user discussing papers is Cliff Pickover who shared "On the Smale Conjecture for Diff$(S^4)$" by Selman Akbulut (Aug 2020)
Over the past week, 8 new papers were published in "Computer Science - Multiagent Systems".
The paper shared the most on social media this week is by a team at DeepMind: "Imitating Interactive Intelligence" by Josh Abramson et al (Dec 2020)
The most influential Twitter user discussing papers is Cliff Pickover who shared "On the Smale Conjecture for Diff$(S^4)$" by Selman Akbulut (Aug 2020)
Over the past week, 27 new papers were published in "Computer Science - Neural and Evolutionary Computing".
Leading researcher Pieter Abbeel (University of California, Berkeley) published "Parallel Training of Deep Networks with Local Updates". @HarshSikka tweeted "This seems like a very promising result! The oft cited reason people don't look at local update rules seriously (from when I last looked into them ~1.5 yrs ago) was that they don't approach the efficiency of backprop. Very cool to see they work in high compute scenarios!".
The paper shared the most on social media this week is by a team at University of Washington: "Brain Co-Processors: Using AI to Restore and Augment Brain Function" by Rajesh P. N. Rao (Dec 2020)
The most influential Twitter user discussing papers is Daniel Roy who shared "Every Model Learned by Gradient Descent Is Approximately a Kernel Machine" by Pedro Domingos (Nov 2020)
This week was very active for "Computer Science - Robotics", with 73 new papers.
The paper discussed most in the news over the past week was "Waymo Public Road Safety Performance Data" by Matthew Schwall et al (Oct 2020), which was referenced 4 times, including in the article This Arizona college student has taken over 60 driverless Waymo rides in ArsTechnica. The paper author, Matthew Schwall, was quoted saying "Nearly all the actual and simulated events involved one or more road rule violations or other incautious behavior by another agent, including all eight of the most severe events involving actual or expected airbag deployment". The paper was shared 2 times on social media.
Leading researcher Pieter Abbeel (University of California, Berkeley) published "Reset-Free Lifelong Learning with Skill-Space Planning".
The paper shared the most on social media this week is by a team at Google: "iNeRF: Inverting Neural Radiance Fields for Pose Estimation" by Lin Yen-Chen et al (Dec 2020) with 63 shares. The investigators examine whether analysis-by-synthesis with NeRF can be applied to 6DoF pose estimation: given an image, find the translation and rotation of a camera relative to a 3D model. @AjdDavison (Andrew Davison) tweeted "Aha, camera tracking using implicit scene models. So what about building a full SLAM system? Watch this space... ;".
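The analysis-by-synthesis loop can be sketched with a toy stand-in for the renderer. Everything below is illustrative: `render` is a hand-rolled 2-D projection rather than a NeRF, and iNeRF backpropagates through the network instead of using finite differences, but the structure (descend on a photometric-style loss over pose parameters) is the same idea:

```python
import numpy as np

# Toy scene: a few fixed 2-D points observed under an unknown pose.
POINTS = np.array([[0.0, 1.0], [1.0, 0.0], [1.0, 1.0], [-1.0, 0.5]])

def render(pose):
    """Stand-in renderer: rotate and translate the scene points."""
    theta, tx, ty = pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return POINTS @ R.T + np.array([tx, ty])

def loss(pose, target):
    # photometric-style squared error between rendered and observed views
    return np.sum((render(pose) - target) ** 2)

def grad(pose, target, eps=1e-5):
    # central finite differences (iNeRF backprops through the NeRF instead)
    g = np.zeros_like(pose)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (loss(pose + d, target) - loss(pose - d, target)) / (2 * eps)
    return g

true_pose = np.array([0.4, 0.3, -0.2])   # rotation angle, tx, ty
target = render(true_pose)               # the "observed image"

pose = np.zeros(3)                       # initial guess
for _ in range(500):
    pose -= 0.05 * grad(pose, target)

print(np.round(pose, 3))                 # recovers approximately [0.4, 0.3, -0.2]
```

As with iNeRF itself, the loss is nonconvex in the pose, so success depends on a reasonable initial guess.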
The most influential Twitter user discussing papers is Cliff Pickover who shared "On the Smale Conjecture for Diff$(S^4)$" by Selman Akbulut (Aug 2020)