Week Ending 11.1.2020
RESEARCH WATCH: 11.1.2020
This week was active for "Computer Science - Artificial Intelligence", with 143 new papers.
The paper discussed most in the news over the past week was "Its Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners" by Timo Schick et al (Sep 2020), which was referenced 9 times, including in the article GPT-3 vs PET: Not Big but Beautiful! in Towards Data Science. Anna Rogers (University of Massachusetts Lowell), who is not part of the study, said "More data & compute = SOTA". The paper got social media traction with 368 shares. The investigators show that performance similar to GPT-3 can be obtained with language models whose parameter count is several orders of magnitude smaller. A user, @timo_schick, tweeted "🎉 New paper 🎉 We show that language models are few-shot learners even if they have far less than 175B parameters. Our method performs similar to GPT-3 on SuperGLUE after training on 32 examples with just 0.1% of its parameter count: #NLProc".
Leading researcher Sergey Levine (University of California, Berkeley) came out with "Conservative Safety Critics for Exploration". The researchers target the problem of safe exploration in RL by learning a conservative safety estimate of environment states through a critic, and provably upper bound the likelihood of catastrophic failures at every training iteration.
The paper shared the most on social media this week is by a team at University of Toronto: "Scientific intuition inspired by machine learning generated hypotheses" by Pascal Friederich et al (Oct 2020) with 84 shares. The researchers shift the focus to the insights and the knowledge obtained by the machine learning models themselves. @ssiddhant_ (Siddhant Sharma) tweeted "Woah! This is something nice and looking at all the cool possibility applying the approach to computational chemistry 💻".
The most influential Twitter user discussing papers is K Ken Nakamura who shared "General Procedure of Gauge Fixings and Ghosts" by Nobuyoshi Ohta (Oct 2020) and said: "Gauge fixings and Ghosts: The topic I want to understand. 以前から理解したいと思っていたゴーストの話。".
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 234 new papers.
The paper discussed most in the news over the past week was "Towards Hardware-Agnostic Gaze-Trackers" by Jatin Sharma et al (Oct 2020), which was referenced 2 times, including in the article Microsoft Releases Gaze-Tracking System That Works On Any Device in Analytics India Magazine. The paper got social media traction with 9 shares.
Leading researcher Ruslan Salakhutdinov (Carnegie Mellon University) published "Unsupervised Domain Adaptation for Visual Navigation".
The paper shared the most on social media this week is by a team at Carnegie Mellon University: "Understanding the Failure Modes of Out-of-Distribution Generalization" by Vaishnavh Nagarajan et al (Oct 2020) with 89 shares.
The most influential Twitter user discussing papers is K Ken Nakamura, who shared "General Procedure of Gauge Fixings and Ghosts" by Nobuyoshi Ohta (Oct 2020).
This week was active for "Computer Science - Computers and Society", with 35 new papers.
The paper discussed most in the news over the past week was "We Dont Speak the Same Language: Interpreting Polarization through Machine Translation" by Ashiqur R. KhudaBukhsh et al (Oct 2020), which was referenced 41 times, including in the article MIL-OSI Global: Fox News viewers write about ‘BLM’ the same way CNN viewers write about ‘KKK’ in Foreign Affairs.co.nz. The paper author, Mark S. Kamlet (University Professor of Economics and Public Policy), was quoted saying "Some of these so-called misaligned pairs seem pretty obvious". The paper got social media traction with 20 shares. A Twitter user, @hrksrkr, commented "Research suggests that polarization in the political sphere has become so extreme that supporters literally express the same sentiments in different languages. One of the lead authors is my brother who just completed his bachelor's CS".
Leading researcher Yoshua Bengio (Université de Montréal) published "COVI-AgentSim: an Agent-based Model for Evaluating Methods of Digital Contact Tracing".
This week was active for "Computer Science - Human-Computer Interaction", with 27 new papers.
The paper discussed most in the news over the past week was "Towards Hardware-Agnostic Gaze-Trackers" by Jatin Sharma et al (Oct 2020)
This week was extremely active for "Computer Science - Learning", with 495 new papers.
The paper discussed most in the news over the past week was by a team at University of Waterloo: "Less Than One-Shot Learning: Learning N Classes From M<N Samples". A Twitter user commented: "That's so cool! Research: 'Less than one-shot learning can teach a model to identify more objects than the number of examples it is trained on.'".
Leading researcher Yoshua Bengio (Université de Montréal) published "COVI-AgentSim: an Agent-based Model for Evaluating Methods of Digital Contact Tracing".
The paper shared the most on social media this week is by a team at Google: "Language ID in the Wild: Unexpected Challenges on the Path to a Thousand-Language Web Text Corpus" by Isaac Caswell et al (Oct 2020) with 148 shares. @_arohan_ (Rohan Anil) tweeted "We all know there is going to be trillion parameter models on trillion or more tokens. But before we do that we all need to start from here, a crucial step of cleaning data, identifying language, read more below 👇".
Over the past week, 16 new papers were published in "Computer Science - Multiagent Systems".
The paper discussed most in the news over the past week was by a team at Stanford University: "Competing AI: How does competition feedback affect machine learning?" by Antonio Ginart et al (Sep 2020), which was referenced 2 times, including in the article When algorithms compete, who wins? in Stanford University. The paper author, Antonio Ginart (Stanford University), was quoted saying "By winning customers, they’re getting a new set of data from those customers, and then by updating their models on this new set of data, they’re actually then changing the model and biasing it toward the new customers they’ve won over". The paper got social media traction with 9 shares. A Twitter user, @arXiv__ml, posted "#machinelearning This papers studies how competition affects machine learning (ML) predictors. As ML becomes more ubiquitous, it is often deploy".
Leading researcher Yoshua Bengio (Université de Montréal) published "COVI-AgentSim: an Agent-based Model for Evaluating Methods of Digital Contact Tracing".
Over the past week, 31 new papers were published in "Computer Science - Neural and Evolutionary Computing".
Leading researcher Luc Van Gool (Computer Vision Laboratory) came out with "Neural Architecture Search of SPD Manifold Networks". The investigators propose a new neural architecture search (NAS) problem of Symmetric Positive Definite (SPD) manifold networks.
This week was very active for "Computer Science - Robotics", with 87 new papers.
The paper discussed most in the news over the past week was "Towards Target-Driven Visual Navigation in Indoor Scenes via Generative Imitation Learning" by Qiaoyun Wu et al (Sep 2020), which was referenced 1 time, including in the article A system to improve a robot's indoor navigation in Tech Xplore.
Leading researcher Ruslan Salakhutdinov (Carnegie Mellon University) published "Unsupervised Domain Adaptation for Visual Navigation".