Week Ending 1.31.2021

 

RESEARCH WATCH: 1.31.2021

 

This week was active for "Computer Science", with 1,108 new papers.

This week was very active for "Computer Science - Artificial Intelligence", with 162 new papers.

This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 217 new papers.

  • The paper discussed most in the news over the past week was by a team at Sorbonne University: "Training data-efficient image transformers & distillation through attention" by Hugo Touvron et al. (Dec 2020), which was referenced 5 times, including in the article Distilling Transformers: (DeiT) Data-efficient Image Transformers in Towards Data Science. The paper got social media traction with 296 shares. The researchers produce a competitive convolution-free transformer by training on ImageNet only. A user, @omarsar0, tweeted "DeiT - Transformer-based image classification model built for high performance and requiring less compute & data. Uses distillation through attention and achieves 84.2 top-1 accuracy on the ImageNet benchmark trained on a single 8-GPU server over 3 days".

  • Leading researcher Pieter Abbeel (University of California, Berkeley) came out with "Bottleneck Transformers for Visual Recognition", which had 21 shares over the past 3 days and was also the paper shared most on social media overall, with 152 tweets. @hillbig (Daisuke Okanohara) tweeted "Bottleneck Transformer (BoT) just replace conv layers in the last 3 blocks of ResNet with self-attention to improve the performance of object detection, instance segmentation, and image classification (w/ larger resolution in last blocks) significantly".
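The core idea summarized in that tweet is that, in the last ResNet stage, spatial convolutions are swapped for global self-attention, so every position in the feature map can aggregate information from every other position. As a rough, framework-free sketch (not the paper's implementation; the function name, dimensions, and weight matrices here are illustrative), single-head self-attention over a flattened feature map looks like this:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention over a set of feature vectors.

    x: (n, d) array of n positions (e.g. a flattened feature map), each with d features.
    Wq, Wk, Wv: (d, d) learned projection matrices (random here, for illustration).
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(x.shape[1])            # (n, n) pairwise affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)     # softmax over all positions
    return weights @ v                                # each output mixes every position

rng = np.random.default_rng(0)
n, d = 16, 8                                          # e.g. a 4x4 feature map, 8 channels
x = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))
out = self_attention(x, Wq, Wk, Wv)
print(out.shape)  # (16, 8)
```

Unlike a 3x3 convolution, which only mixes a local neighborhood, the (n, n) weight matrix lets distant positions interact in one step, which is the motivation for using attention in the late, low-resolution blocks where n is small enough to be affordable.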

This week was active for "Computer Science - Computers and Society", with 42 new papers.

This week was very active for "Computer Science - Human-Computer Interaction", with 53 new papers.

  • The paper discussed most in the news over the past week was by a team at Boston University: "Dissecting the Meme Magic: Understanding Indicators of Virality in Image Memes" by Chen Ling et al. (Jan 2021), which was referenced 3 times, including in the article AI Finds Out Formula To Make Memes As Viral As The Bernie Sanders One in TAXI. The paper author, Chen Ling (Hangzhou Dianzi University), was quoted saying "Image memes that are poorly composed are unlikely to be re-shared and go viral, compared to a high-contrast classic portrait that catches viewer's attention in a museum, for instance". The paper got social media traction with 49 shares. The investigators distinguish image memes that are highly viral on social media from those that do not get re-shared, across three dimensions: composition, subjects, and target audience. On Twitter, @suedemarketing commented "See link attached to read about researchers from prominent universities studying what makes a viral meme. They also developed an AI that can guess if a meme will go viral with 86% accuracy. This is how to stay relevant today!".

This week was very active for "Computer Science - Learning", with 335 new papers.

Over the past week, 16 new papers were published in "Computer Science - Multiagent Systems".

Over the past week, 28 new papers were published in "Computer Science - Neural and Evolutionary Computing".

  • The paper discussed most in the news over the past week was "Can a Fruit Fly Learn Word Embeddings?" by Yuchen Liang et al. (Jan 2021), which was referenced 2 times, including in the article Fruit Fly Brain Network Hacked For Language Processing in Discover Magazine. The paper author, Yuchen Liang, was quoted saying "We view this result as an example of a general statement that biologically inspired algorithms might be more compute efficient compared with their classical (non-biological) counterparts". The paper got social media traction with 188 shares. A user, @hurrythas, tweeted "Imagine being a fruit fly, you got like two months to live and mfkrs tryna teach you how to read. Fruits and flying are more important 😤", while @KevinKaichuang said "Learning word embeddings using a neural network inspired by fruit fly brains. Anyways if you need me I'll be teaching a fruit fly to embed proteins".

This week was active for "Computer Science - Robotics", with 51 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.