Week Ending 1.16.2022
RESEARCH WATCH: 1.16.2022
Over the past week, 910 new papers were published in "Computer Science".
The paper discussed most in the news over the past week was by a team at NASA Ames Research Center: "ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier that Validates 301 New Exoplanets" by Hamed Valizadegan et al (Nov 2021), which was referenced 84 times, including in the article ExoMiner: NASA’s Deep Neural Network of 2021 in Medium.com. The paper's author, Hamed Valizadegan (machine learning manager with the Universities Space Research Association at Ames), was quoted saying "When ExoMiner says something is a planet, you can be sure it’s a planet. ExoMiner is highly accurate and in some ways more reliable than both existing machine classifiers and the human experts it’s meant to emulate because of the biases that come with human labeling. Now that we’ve trained ExoMiner using Kepler data, with a little fine-tuning, we can transfer that learning to other missions, including TESS, which we’re currently working on. There’s room to grow." The paper got social media traction with 44 shares. A Twitter user, @storybywill, observed "We didn't. The team who developed ExoMine did, and here is their paper", while @arxiv_cs_LG commented "ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier to Mine Exoplanets. Valizadegan, Martinho, Wilkens, Jenkins, Smith, Caldwell, Twicken, Gerum, Walia, Hausknecht, Lubin, Bryson, and Oza".
Leading researcher Trevor Darrell (UC Berkeley) published "A ConvNet for the 2020s". This paper was also shared the most on social media, with 784 tweets. @TacoCohen (Taco Cohen) tweeted "👉 The first law of DL architectures 👈 "Whatever" is all you need 🤯 Any problem that can be solved by transformer / ViT can be solved by MLP / CNN, and vice versa. Same for RNNs".
This week was active for "Computer Science - Artificial Intelligence", with 132 new papers.
The paper discussed most in the news over the past week was by a team at DeepMind: "Ethical and social risks of harm from Language Models" by Laura Weidinger et al (Dec 2021), which was referenced 7 times, including in the article How bias creeps into large language models in Analytics India Magazine. Melanie Mitchell (Santa Fe Institute), who is not part of the study, said "[The] ways that we measure performance of these systems needs to be expanded … When the benchmarks are changed a little bit, they [often] don’t generalize well". The paper got social media traction with 61 shares. The investigators aim to help structure the risk landscape associated with large-scale Language Models (LMs). On Twitter, @TobyWalsh posted "The first word is misplaced. The paper is not comprehensive. 30 pages of risks. 3 pages on possible means to mitigates those risks. If it were comprehensive, it would cover mitigation in as much detail as harms".
The paper shared the most on social media this week is "An Introduction to Autoencoders" by Umberto Michelucci (Jan 2022) with 152 shares. The authors give an introduction to autoencoders: networks trained to compress their input into a low-dimensional code and then reconstruct it. @denocris (Cristiano De Nobili) tweeted "A simple and easy to read intro to Autoencoders! 👏 #deeplearning".
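The core idea behind autoencoders fits in a few lines. The following is a minimal linear autoencoder in NumPy, an illustrative toy rather than code from the paper: it encodes 5-dimensional points into a 2-dimensional code with tied weights and trains by plain gradient descent on the reconstruction error (all sizes and hyperparameters here are made up for the sketch).

```python
import numpy as np

# Minimal linear autoencoder sketch (illustrative, not from the paper):
# encode x -> z = x @ W, decode z -> x_hat = z @ W.T (tied weights),
# trained with gradient descent on mean squared reconstruction error.

rng = np.random.default_rng(0)

# Toy data: 200 points in R^5 that actually lie on a 2-D subspace.
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 5))
X = latent @ basis

W = rng.normal(scale=0.1, size=(5, 2))  # tied encoder/decoder weights

def mse(X, W):
    X_hat = (X @ W) @ W.T               # encode, then decode
    return np.mean((X - X_hat) ** 2)

lr = 0.01
for _ in range(2000):
    E = (X @ W) @ W.T - X               # reconstruction residual
    # Gradient of the MSE with respect to the tied weight matrix W
    grad = (2.0 / X.size) * (X.T @ E @ W + E.T @ X @ W)
    W -= lr * grad

print("final reconstruction MSE:", mse(X, W))
```

On data that truly lies on a 2-D subspace, the reconstruction error drops to near zero, which is the expected behavior of a linear autoencoder (equivalent to PCA up to a rotation of the code space); the deep, nonlinear variants the paper surveys generalize this compress-then-reconstruct recipe.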
The most influential Twitter user discussing papers is AK who shared "Two-Pass End-to-End ASR Model Compression" by Nauman Dawalatabad et al (Jan 2022) and said: "Two-Pass End-to-End ASR Model Compression abs: experimental results on standard LibriSpeech dataset show that system can achieve a high compression rate of 55% without significant degradation in the WER compared to the two-pass teacher model".
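The tweet refers to compressing a student against a "two-pass teacher model", i.e. teacher-student training. As a hedged sketch of the generic knowledge-distillation loss used in such setups (temperature-softened teacher targets blended with the hard-label loss; the function names, temperature, and blend weight below are illustrative, not the authors' recipe):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-target cross-entropy (vs. teacher) with hard-label loss.

    Illustrative generic recipe, not the paper's exact objective.
    """
    # Soft targets: cross-entropy between softened teacher and student.
    p_teacher = softmax(teacher_logits, T)
    log_p_student_T = np.log(softmax(student_logits, T) + 1e-12)
    soft = -np.mean(np.sum(p_teacher * log_p_student_T, axis=-1)) * (T ** 2)
    # Hard targets: standard cross-entropy against the true labels.
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    hard = -np.mean(log_p_student[np.arange(len(labels)), labels])
    return alpha * soft + (1 - alpha) * hard
```

Raising the temperature T spreads the teacher's probability mass over more classes, so the student also learns the teacher's relative ranking of wrong answers; the T**2 factor keeps gradient magnitudes comparable as T varies.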
Over the past week, 194 new papers were published in "Computer Science - Computer Vision and Pattern Recognition".
The paper discussed most in the news over the past week was by a team at Google: "PolyViT: Co-training Vision Transformers on Images, Videos and Audio" by Valerii Likhosherstov et al (Nov 2021), which was referenced 7 times, including in the article Google Research: Themes from 2021 and Beyond in Google AI Blog. The paper got social media traction with 95 shares. On Twitter, @TheSequenceAI observed "PolyViT achieves state-of-the-art results on 5 standard classification datasets. Co-training PolyViT leads to a model that: 1. is even more parameter-efficient 2. learns representations that generalize across multiple domains The original paper: 2/2".
Leading researcher Trevor Darrell (UC Berkeley) came out with "A ConvNet for the 2020s". This paper was also shared the most on social media, with 784 tweets. @TacoCohen (Taco Cohen) tweeted "👉 The first law of DL architectures 👈 "Whatever" is all you need 🤯 Any problem that can be solved by transformer / ViT can be solved by MLP / CNN, and vice versa. Same for RNNs".
Over the past week, 24 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was by a team at DeepMind: "Ethical and social risks of harm from Language Models" by Laura Weidinger et al (Dec 2021).
This week was very active for "Computer Science - Human-Computer Interaction", with 41 new papers.
The paper discussed most in the news over the past week was by a team at Institute of Information Security: "Phishing in Organizations: Findings from a Large-Scale and Long-Term Study" by Daniele Lain et al (Dec 2021), which was referenced 12 times, including in the article Anti-phishing training may make people more likely to fall victim to phishing in CyberNews. The paper also got the most social media traction with 351 shares. The investigators present findings from a large-scale and long-term phishing experiment that they have conducted in collaboration with a partner company. A user, @SrdjanCapkun, tweeted "15 months, more than 14,000 participants How susceptible are employees of a large organization to phishing? Does training help? Can employees collectively detect phishing? Answers in this great work by (to appear in IEEE S&P 2022)".
This week was very active for "Computer Science - Learning", with 304 new papers.
The paper discussed most in the news over the past week was by a team at NASA Ames Research Center: "ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier that Validates 301 New Exoplanets" by Hamed Valizadegan et al (Nov 2021).
Leading researcher Luc Van Gool (Computer Vision Laboratory) published "End-To-End Optimization of LiDAR Beam Configuration for 3D Object Detection and Localization". The researchers take a new route to learn to optimize the LiDAR beam configuration for a given application. @summarizedml tweeted "A reinforcement learning-based learning-to-optimize (RL-L2O) framework to learn to optimize the LiD 📄".
The paper shared the most on social media this week is "Deep Symbolic Regression for Recurrent Sequences" by Stéphane d'Ascoli et al (Jan 2022) with 316 shares. The authors train Transformers to infer the function or recurrence relation underlying sequences of integers or floats, a typical task in human IQ tests that has hardly been tackled in the machine learning literature. @HochreiterSepp (Sepp Hochreiter) tweeted "ArXiv Predicting symbolic functions from values via transformers: outperforms built-in Mathematica functions for recurrence prediction. Gives approximations to different functions which might be useful for efficient implementations".
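For contrast with the learned approach, the classical baseline for integer sequences is explicit search over candidate recurrences. A minimal brute-force sketch (illustrative only, not the paper's method) that recovers an order-2 linear recurrence u[n] = a*u[n-1] + b*u[n-2] with small integer coefficients:

```python
from itertools import product

# Brute-force search for integer coefficients (a, b) such that
# seq[n] == a * seq[n-1] + b * seq[n-2] holds across the sequence.
# Illustrative baseline only; the paper instead trains a Transformer
# to predict the underlying recurrence directly from the terms.

def find_linear_recurrence(seq, coeff_range=range(-5, 6)):
    for a, b in product(coeff_range, repeat=2):
        if all(seq[n] == a * seq[n - 1] + b * seq[n - 2]
               for n in range(2, len(seq))):
            return a, b
    return None  # no order-2 recurrence with coefficients in range

fib = [1, 1, 2, 3, 5, 8, 13, 21]
print(find_linear_recurrence(fib))  # → (1, 1), i.e. the Fibonacci rule
```

Enumeration like this only reaches a tiny, fixed hypothesis class; the appeal of the paper's approach is that a trained model can propose non-linear recurrences and handle float sequences, where exhaustive search is infeasible.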
The most influential Twitter user discussing papers is AK who shared "Two-Pass End-to-End ASR Model Compression" by Nauman Dawalatabad et al (Jan 2022).
Over the past week, 14 new papers were published in "Computer Science - Multiagent Systems".
The paper discussed most in the news over the past week was by a team at DeepMind: "Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria" by Kavya Kopparapu et al (Jan 2022), which was referenced 1 time, including in the article Harvard and DeepMind’s “Hidden Agenda” might solve the problem of multi-agent cooperation in Analytics India Magazine. The paper got social media traction with 12 shares. A user, @summarizedml, tweeted "Hidden Agenda is a 2D social deduction game that enables Reinforcement Learning agents to learn a variety of behaviors in scenarios of unknown team alignment. 📄", while @ak92501 commented "Hidden Agenda: a Social Deduction Game with Diverse Learned Equilibria abs: present Hidden Agenda, a two-team social deduction game that provides a 2D environment for studying learning agents in scenarios of unknown team alignment".
Over the past week, 12 new papers were published in "Computer Science - Neural and Evolutionary Computing".
This week was active for "Computer Science - Robotics", with 51 new papers.
The paper discussed most in the news over the past week was "PIEEG: Turn a Raspberry Pi into a Brain-Computer-Interface to measure biosignals" by Ildar Rakhmatulin et al (Jan 2022), which was referenced 6 times, including in the article Turning a Raspberry Pi Into a Brain-Computer Interface? Researchers Open-Source the Low-Cost, High-Precision PIEEG in SyncedReview.com. The paper got social media traction with 65 shares. The authors present an inexpensive, high-precision, yet easy-to-maintain PIEEG board that converts a Raspberry Pi into a brain-computer interface. A Twitter user, @M157q_News_RSS, commented "Turn a Raspberry Pi into a Brain-Computer-Interface to Measure Biosignals Article URL: Comments URL: Points: 107 # Comments: 25", while @NeuroTechZH commented "Congratulations it's great work you're doing bringing EEG to the Raspberry Pi platform. #bci #iot #eeg".
Leading researcher Luc Van Gool (Computer Vision Laboratory) came out with "End-To-End Optimization of LiDAR Beam Configuration for 3D Object Detection and Localization". The researchers take a new route to learn to optimize the LiDAR beam configuration for a given application. @summarizedml tweeted "A reinforcement learning-based learning-to-optimize (RL-L2O) framework to learn to optimize the LiD 📄".