Week Ending 1.16.2022

 

RESEARCH WATCH: 1.16.2022

 

Over the past week, 910 new papers were published in "Computer Science".

  • The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier that Validates 301 New Exoplanets" by Hamed Valizadegan et al. (Nov 2021), which was referenced 84 times, including in the article "ExoMiner: NASA’s Deep Neural Network of 2021" on Medium.com. Lead author Hamed Valizadegan (machine learning manager with the Universities Space Research Association at Ames) was quoted as saying "When ExoMiner says something is a planet, you can be sure it’s a planet. ExoMiner is highly accurate and in some ways more reliable than both existing machine classifiers and the human experts it’s meant to emulate because of the biases that come with human labeling. Now that we’ve trained ExoMiner using Kepler data, with a little fine-tuning, we can transfer that learning to other missions, including TESS, which we’re currently working on. There’s room to grow." The paper got social media traction with 44 shares. A Twitter user, @storybywill, observed "We didn't. The team who developed ExoMine did, and here is their paper", while @arxiv_cs_LG commented "ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier to Mine Exoplanets. Valizadegan, Martinho, Wilkens, Jenkins, Smith, Caldwell, Twicken, Gerum, Walia, Hausknecht, Lubin, Bryson, and Oza".

  • Leading researcher Trevor Darrell (UC Berkeley) published "A ConvNet for the 2020s", which was also the paper shared most on social media this week, with 784 tweets. @TacoCohen (Taco Cohen) tweeted "👉 The first law of DL architectures 👈 "Whatever" is all you need 🤯 Any problem that can be solved by transformer / ViT can be solved by MLP / CNN, and vice versa. Same for RNNs".

This week was active for "Computer Science - Artificial Intelligence", with 132 new papers.

  • The paper discussed most in the news over the past week was by a team at DeepMind: "Ethical and social risks of harm from Language Models" by Laura Weidinger et al. (Dec 2021), which was referenced 7 times, including in the article "How bias creeps into large language models" in Analytics India Magazine. Melanie Mitchell (Santa Fe Institute), who was not involved in the study, said "[The] ways that we measure performance of these systems needs to be expanded … When the benchmarks are changed a little bit, they [often] don’t generalize well". The paper got social media traction with 61 shares. The investigators aim to help structure the risk landscape associated with large-scale language models (LMs). On Twitter, @TobyWalsh posted "The first word is misplaced. The paper is not comprehensive. 30 pages of risks. 3 pages on possible means to mitigates those risks. If it were comprehensive, it would cover mitigation in as much detail as harms".

  • The paper shared the most on social media this week is "An Introduction to Autoencoders" by Umberto Michelucci (Jan 2022) with 152 shares. The author gives an introductory treatment of autoencoders (a minimal code sketch appears after the items in this section). @denocris (Cristiano De Nobili) tweeted "A simple and easy to read intro to Autoencoders! 👏 #deeplearning".

  • The most influential Twitter user discussing papers is AK, who shared "Two-Pass End-to-End ASR Model Compression" by Nauman Dawalatabad et al. (Jan 2022) and said: "Two-Pass End-to-End ASR Model Compression abs: experimental results on standard LibriSpeech dataset show that system can achieve a high compression rate of 55% without significant degradation in the WER compared to the two-pass teacher model". A generic teacher-student compression sketch follows directly below.
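
The compression result above is measured against a two-pass teacher model; one common way to compress a network against a teacher is knowledge distillation, sketched generically below. This is only an illustration under assumed settings: the model sizes, temperature, loss weighting, and random stand-in data are not taken from the Dawalatabad et al. paper.

```python
# Generic teacher-student distillation sketch (illustrative; not the paper's exact recipe).
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(80, 512), nn.ReLU(), nn.Linear(512, 30))  # larger "teacher"
student = nn.Sequential(nn.Linear(80, 128), nn.ReLU(), nn.Linear(128, 30))  # smaller "student"
teacher.eval()  # the teacher is frozen; only the student is trained

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T, alpha = 2.0, 0.5  # softening temperature and loss weighting (assumed values)

features = torch.rand(32, 80)          # stand-in acoustic features
labels = torch.randint(0, 30, (32,))   # stand-in targets

for step in range(100):
    with torch.no_grad():
        teacher_logits = teacher(features)
    student_logits = student(features)
    ce = F.cross_entropy(student_logits, labels)          # hard-label loss
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * T * T          # match the teacher's softened outputs
    loss = alpha * ce + (1 - alpha) * kd
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```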
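
For readers new to autoencoders, the Michelucci paper mentioned above is introductory; the minimal sketch below shows the core idea of training a network to reconstruct its own input through a low-dimensional bottleneck. Layer sizes, the optimizer, and the random stand-in batch are illustrative assumptions, not drawn from the paper.

```python
# Minimal autoencoder sketch: encoder compresses, decoder reconstructs.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # reconstruction loss

x = torch.rand(64, 784)  # stand-in batch, e.g. flattened 28x28 images
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)  # learn to reproduce the input
    loss.backward()
    optimizer.step()
```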

Over the past week, 194 new papers were published in "Computer Science - Computer Vision and Pattern Recognition".

  • The paper discussed most in the news over the past week was by a team at Google: "PolyViT: Co-training Vision Transformers on Images, Videos and Audio" by Valerii Likhosherstov et al. (Nov 2021), which was referenced 7 times, including in the article "Google Research: Themes from 2021 and Beyond" on the Google AI Blog. The paper got social media traction with 95 shares. On Twitter, @TheSequenceAI observed "PolyViT achieves state-of-the-art results on 5 standard classification datasets. Co-training PolyViT leads to a model that: 1. is even more parameter-efficient 2. learns representations that generalize across multiple domains The original paper: 2/2". A sketch of the general co-training pattern appears after the items in this section.

  • Leading researcher Trevor Darrell (UC Berkeley) came out with "A ConvNet for the 2020s", which was also the paper shared most on social media, with 784 tweets. @TacoCohen (Taco Cohen) tweeted "👉 The first law of DL architectures 👈 "Whatever" is all you need 🤯 Any problem that can be solved by transformer / ViT can be solved by MLP / CNN, and vice versa. Same for RNNs".
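
The PolyViT item above describes co-training a single transformer across images, video and audio. The sketch below illustrates the general pattern only: one shared backbone, per-task heads, and one task sampled per optimization step. The module sizes, class counts, task-sampling rule, and random stand-in data are assumptions, not PolyViT's actual implementation.

```python
# Generic multi-task co-training sketch: shared backbone, per-task heads,
# one task sampled per step (illustrative; not PolyViT's exact scheme).
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

shared = nn.Sequential(nn.Linear(256, 512), nn.ReLU(), nn.Linear(512, 256))  # stand-in for a shared encoder
heads = nn.ModuleDict({
    "image": nn.Linear(256, 1000),  # assumed class counts per modality
    "video": nn.Linear(256, 400),
    "audio": nn.Linear(256, 527),
})
optimizer = torch.optim.Adam(list(shared.parameters()) + list(heads.parameters()), lr=1e-4)

def fake_batch(task):
    # Stand-in pre-tokenized inputs and labels for the chosen modality.
    n_classes = heads[task].out_features
    return torch.rand(16, 256), torch.randint(0, n_classes, (16,))

for step in range(300):
    task = random.choice(["image", "video", "audio"])  # alternate tasks across steps
    x, y = fake_batch(task)
    loss = F.cross_entropy(heads[task](shared(x)), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```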

Over the past week, 24 new papers were published in "Computer Science - Computers and Society".

This week was very active for "Computer Science - Human-Computer Interaction", with 41 new papers.

  • The paper discussed most in the news over the past week was by a team at the Institute of Information Security: "Phishing in Organizations: Findings from a Large-Scale and Long-Term Study" by Daniele Lain et al. (Dec 2021), which was referenced 12 times, including in the article "Anti-phishing training may make people more likely to fall victim to phishing" in CyberNews. The paper also got the most social media traction with 351 shares. The investigators present findings from a large-scale and long-term phishing experiment that they conducted in collaboration with a partner company. A user, @SrdjanCapkun, tweeted "15 months, more than 14,000 participants How susceptible are employees of a large organization to phishing? Does training help? Can employees collectively detect phishing? Answers in this great work by (to appear in IEEE S&P 2022)".

This week was very active for "Computer Science - Learning", with 304 new papers.

Over the past week, 14 new papers were published in "Computer Science - Multiagent Systems".

Over the past week, 12 new papers were published in "Computer Science - Neural and Evolutionary Computing".

This week was active for "Computer Science - Robotics", with 51 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.