Week Ending 4.4.2021

 

RESEARCH WATCH: 4.4.2021

 

This week was active for "Computer Science", with 1,339 new papers.

  • The paper discussed most in the news over the past week was "Variational inference with a quantum computer" by Marcello Benedetti et al. (Mar 2021), which was referenced 60 times, including in the article Cambridge Quantum Computing Pioneers Quantum Machine Learning Methods for Reasoning in Yahoo! Finance. Paper author Matthias Rosenkranz was quoted as saying "cannot offer simple explanations for their answers and struggle when asked how confident they are on certain possible outcomes". The paper got social media traction with 14 shares. On Twitter, @rosenkranz commented "Our new paper "Variational inference with a #quantum computer" has been out for a few days 🎉. We develop the methods, then demonstrate them using a few graphical models (e.g. hidden Markov). #QuantumComputing #MachineLearning".

  • Leading researcher Pieter Abbeel (University of California, Berkeley) came out with "Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis", which had 22 shares over the past 2 days. @JimmyMWhitaker tweeted "These are getting crazy good".

  • The paper shared the most on social media this week is by a team at Google: "EfficientNetV2: Smaller Models and Faster Training" by Mingxing Tan et al. (Apr 2021) with 429 shares. The researchers introduce EfficientNetV2, a new family of convolutional networks that have faster training speed and better parameter efficiency than previous models. @arankomatsuzaki (Aran Komatsuzaki) tweeted "EfficientNetV2: Smaller Models and Faster Training With progressive learning, EfficientNetV2 significantly outperforms previous models on ImageNet, including ViT by 2.0% acc. while training 5x-11x faster. abs: code".

This week was very active for "Computer Science - Artificial Intelligence", with 193 new papers.

  • The paper discussed most in the news over the past week was by a team at Peking University: "Transformer in Transformer" by Kai Han et al. (Feb 2021), which was referenced 6 times, including in the article Transformers: An exciting revolution from text to videos in Medium.com. The paper got social media traction with 177 shares. The investigators propose a novel Transformer-iN-Transformer (TNT) model for modeling both patch-level and pixel-level representations. A user, @chriswolfvision, tweeted "2014: "Network in Network" 2016: "Learning to learn by gradient descent by gradient descent" 2021: "Transformers in Transformers" 2025: "All you need is all you need"?".

  • Leading researcher Pieter Abbeel (University of California, Berkeley) published "Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis", which had 22 shares over the past 2 days. @JimmyMWhitaker tweeted "These are getting crazy good".

  • The paper shared the most on social media this week is by a team at University of Sheffield: "Using Artificial Intelligence to Shed Light on the Star of Biscuits: The Jaffa Cake" by H. F. Stevance (Mar 2021) with 255 shares. @CosmicRami (Rami Mandow 🏳️‍🌈) tweeted "Hands down the best paper this year on arXiv. Should size, environment & visible properties determine if <object> is in fact legit? Two classifiers were brought in to help answer this important question. Object = Jaffa Cakes".

This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 489 new papers.

Over the past week, 24 new papers were published in "Computer Science - Computers and Society".

This week was very active for "Computer Science - Human-Computer Interaction", with 40 new papers.

This week was very active for "Computer Science - Learning", with 397 new papers.

  • The paper discussed most in the news over the past week was "Variational inference with a quantum computer" by Marcello Benedetti et al. (Mar 2021).

  • Leading researcher Aaron Courville (Université de Montréal) came out with "Touch-based Curiosity for Sparse-Reward Tasks". The researchers leverage surprise from mismatches in touch feedback to guide exploration in hard, sparse-reward reinforcement learning tasks.

  • The paper shared the most on social media this week is "StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery" by Or Patashnik et al. (Mar 2021) with 345 shares. The researchers explore leveraging the power of the recently introduced Contrastive Language-Image Pre-training (CLIP) models to develop a text-based interface for StyleGAN image manipulation that does not require manual effort. @EladRichardson (Elad Richardson) tweeted "Really cool work by et al on text-based image manipulation! I keep getting amazed by the diverse knowledge that the CLIP model contains and the flexibility of text-based manipulation".

Over the past week, 18 new papers were published in "Computer Science - Multiagent Systems".

Over the past week, 18 new papers were published in "Computer Science - Neural and Evolutionary Computing".

This week was very active for "Computer Science - Robotics", with 100 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.