Week Ending 1.31.2021
RESEARCH WATCH: 1.31.2021
This week was active for "Computer Science", with 1,108 new papers.
The paper discussed most in the news over the past week was by a team at Carnegie Mellon University: "Fringe News Networks: Dynamics of US News Viewership following the 2020 Presidential Election" by Ashiqur R. KhudaBukhsh et al (Jan 2021), which was referenced 35 times, including in the article Don’t Blame Fox News for the Attack on the Capitol in Homeland Security News Wire. The paper got social media traction with 6 shares. On Twitter, @khudabukhsh commented "New research analyzing the 64 tumultuous days in American history starting from Nov 3, 20 to Jan 5, 21. Joint work with Mark Kamlet, Paper: #USElection2020 #uselectionresults2020 #FoxNews #Newsmax #NLP".
Leading researcher Oriol Vinyals (DeepMind) published "The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors". @yapp1e tweeted "The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors. Although deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increa…".
The paper shared the most on social media this week is "TT-Rec: Tensor Train Compression for Deep Learning Recommendation Models" by Chunxing Yin et al (Jan 2021) with 168 shares. The researchers demonstrate the promising potential of Tensor Train decomposition for DLRMs (TT - Rec), an important yet under - investigated context. @popular_ML (Popular ML resources) tweeted "The most popular ArXiv tweet in the last 24h".
This week was very active for "Computer Science - Artificial Intelligence", with 162 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al (Jan 2021), which was referenced 12 times, including in the article Six Times Bigger than GPT-3: Inside Google’s TRILLION Parameter Switch Transformer Model in KDNuggets. The paper also got the most social media traction with 788 shares. A user, @LiamFedus, tweeted "Pleased to share new work! We design a sparse language model that scales beyond a trillion parameters. These versions are significantly more sample efficient and obtain up to 4-7x speed-ups over popular models like T5-Base, T5-Large, T5-XXL. Preprint".
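The top-1 routing at the heart of the model can be illustrated in a few lines; the sketch below is a toy under assumed shapes and random weights (and omits the paper's load-balancing loss and capacity factor), not Google's implementation.

    # Toy NumPy sketch of Switch-style top-1 routing: each token goes to a single
    # expert FFN chosen by a learned router, and the expert output is scaled by the
    # router probability. Sizes and weights are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    d_model, d_ff, n_experts, n_tokens = 64, 256, 8, 10
    tokens = rng.standard_normal((n_tokens, d_model))
    router_w = rng.standard_normal((d_model, n_experts)) * 0.02
    experts = [(rng.standard_normal((d_model, d_ff)) * 0.02,
                rng.standard_normal((d_ff, d_model)) * 0.02) for _ in range(n_experts)]

    def switch_layer(x):
        logits = x @ router_w
        probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)
        choice = probs.argmax(axis=-1)                    # top-1 expert per token
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            w1, w2 = experts[choice[t]]
            h = np.maximum(x[t] @ w1, 0.0)                # the chosen expert's FFN
            out[t] = (h @ w2) * probs[t, choice[t]]       # scale by the gate value
        return out, choice

    y, routed_to = switch_layer(tokens)
    print(y.shape, routed_to)

Because only one expert runs per token, adding experts grows the parameter count without growing the per-token compute, which is where the reported speed-ups come from.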
Leading researcher Oriol Vinyals (DeepMind) came out with "The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors". @yapp1e tweeted "The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors. Although deep reinforcement learning has led to breakthroughs in many difficult domains, these successes have required an ever-increa…".
The paper shared the most on social media this week is by a team at Google: "Bottleneck Transformers for Visual Recognition" by Aravind Srinivas et al (Jan 2021) with 152 shares. @hillbig (Daisuke Okanohara) tweeted "Bottleneck Transformer (BoT) just replace conv layers in the last 3 blocks of ResNet with self-attention to improve the performance of object detection, instance segmentation, and image classification (w/ larger resolution in last blocks) significantly".
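The change @hillbig describes is small in code; below is a minimal sketch, assuming toy shapes and random weights, of multi-head self-attention applied to a flattened feature map where the 3x3 convolution used to be. The paper's relative position encodings are omitted.

    # Toy NumPy sketch of the Bottleneck Transformer swap: multi-head self-attention
    # over flattened spatial positions in place of a 3x3 convolution.
    # Shapes and weights here are illustrative assumptions, not the paper's code.
    import numpy as np

    rng = np.random.default_rng(0)
    h, w, c, heads = 8, 8, 64, 4
    x = rng.standard_normal((h * w, c))                   # H*W positions as a sequence
    wq, wk, wv = (rng.standard_normal((c, c)) * 0.02 for _ in range(3))

    def mhsa(x):
        q, k, v = x @ wq, x @ wk, x @ wv
        dh = c // heads
        out = np.zeros_like(x)
        for i in range(heads):
            sl = slice(i * dh, (i + 1) * dh)
            scores = q[:, sl] @ k[:, sl].T / np.sqrt(dh)  # (HW, HW) attention map
            attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
            attn /= attn.sum(axis=-1, keepdims=True)
            out[:, sl] = attn @ v[:, sl]
        return out

    print(mhsa(x).shape)                                  # (64, 64): same shape the conv produced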
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 217 new papers.
The paper discussed most in the news over the past week was by a team at Sorbonne University: "Training data-efficient image transformers & distillation through attention" by Hugo Touvron et al (Dec 2020), which was referenced 5 times, including in the article Distilling Transformers: (DeiT) Data-efficient Image Transformers in Towards Data Science. The paper got social media traction with 296 shares. The researchers produce a competitive convolution-free transformer by training on ImageNet only. A user, @omarsar0, tweeted "DeiT - Transformer-based image classification model built for high performance and requiring less compute & data. Uses distillation through attention and achieves 84.2 top-1 accuracy on the ImageNet benchmark trained on a single 8-GPU server over 3 days".
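The "distillation through attention" in the title refers to an extra distillation token whose prediction is supervised by the teacher network. A minimal sketch of the hard-label variant of that loss, assuming made-up logits and class counts, follows.

    # Toy NumPy sketch of DeiT's hard-label distillation loss: the class token is
    # trained against the ground-truth label, the extra distillation token against
    # the teacher's hard decision. Logits and names here are illustrative assumptions.
    import numpy as np

    def cross_entropy(logits, label):
        z = logits - logits.max()
        return -(z - np.log(np.exp(z).sum()))[label]

    rng = np.random.default_rng(0)
    n_classes = 10
    cls_logits = rng.standard_normal(n_classes)       # head on the class token
    dist_logits = rng.standard_normal(n_classes)      # head on the distillation token
    teacher_logits = rng.standard_normal(n_classes)   # e.g. a convnet teacher
    true_label = 3

    teacher_label = int(teacher_logits.argmax())      # teacher's hard label
    loss = 0.5 * cross_entropy(cls_logits, true_label) \
         + 0.5 * cross_entropy(dist_logits, teacher_label)
    print(loss)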
Leading researcher Pieter Abbeel (University of California, Berkeley) came out with "Bottleneck Transformers for Visual Recognition", which had 21 shares over the past 3 days and was also the paper shared most on social media this week, with 152 tweets. @hillbig (Daisuke Okanohara) tweeted "Bottleneck Transformer (BoT) just replace conv layers in the last 3 blocks of ResNet with self-attention to improve the performance of object detection, instance segmentation, and image classification (w/ larger resolution in last blocks) significantly".
This week was active for "Computer Science - Computers and Society", with 42 new papers.
The paper discussed most in the news over the past week was by a team at Carnegie Mellon University: "Fringe News Networks: Dynamics of US News Viewership following the 2020 Presidential Election" by Ashiqur R. KhudaBukhsh et al (Jan 2021).
The paper shared the most on social media this week is by a team at Google: "Re-imagining Algorithmic Fairness in India and Beyond" by Nithya Sambasivan et al (Jan 2021) with 148 shares. The investigators In this paper, we de-center algorithmic fairness and analyse AI power in India. @annargrs (Anna Rogers) tweeted "Yeah, but SP paper doesn't claim that their list of biases and harms is either exhaustive or universally applicable. I think it's meant as a starting point, for the US context. Here's a nice example of how context-specific these things can be".
This week was very active for "Computer Science - Human-Computer Interaction", with 53 new papers.
The paper discussed most in the news over the past week was by a team at Boston University: "Dissecting the Meme Magic: Understanding Indicators of Virality in Image Memes" by Chen Ling et al (Jan 2021), which was referenced 3 times, including in the article AI Finds Out Formula To Make Memes As Viral As The Bernie Sanders One in TAXI. The paper author, Chen Ling (Hangzhou Dianzi University), was quoted saying "Image memes that are poorly composed are unlikely to be re-shared and go viral, compared to a high-contrast classic portrait that catches viewer’s attention in a museum, for instance". The paper got social media traction with 49 shares. The investigators distinguish image memes that are highly viral on social media from those that do not get re-shared, across three dimensions: composition, subjects, and target audience. On Twitter, @suedemarketing commented "See link attached to read about researchers from prominent universities studying what makes a viral meme. They also developed an AI that can guess if a meme will go viral with 86% accuracy. This is how to stay relevant today!".
This week was very active for "Computer Science - Learning", with 335 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al (Jan 2021).
Leading researcher Oriol Vinyals (DeepMind) came out with "The MineRL 2020 Competition on Sample Efficient Reinforcement Learning using Human Priors".
The paper shared the most on social media this week is "TT-Rec: Tensor Train Compression for Deep Learning Recommendation Models" by Chunxing Yin et al (Jan 2021)
Over the past week, 16 new papers were published in "Computer Science - Multiagent Systems".
Leading researcher Aaron Courville (Université de Montréal) published "Emergent Communication under Competition".
Over the past week, 28 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was "Can a Fruit Fly Learn Word Embeddings?" by Yuchen Liang et al (Jan 2021), which was referenced 2 times, including in the article Fruit Fly Brain Network Hacked For Language Processing in Discover Magazine. The paper author, Yuchan Liang, was quoted saying "We view this result as an example of a general statement that biologically inspired algorithms might be more compute efficient compared with their classical (non-biological) counterparts". The paper got social media traction with 188 shares. A user, @hurrythas, tweeted "Imagine being a fruit fly, you got like two months to live and mfkrs tryna teach you how to read. Fruits and flying are more important 😤", while @KevinKaichuang said "Learning word embeddings using a neural network inspired by fruit fly brains. Anyways if you need me I'll be teaching a fruit fly to embed proteins".
This week was active for "Computer Science - Robotics", with 51 new papers.
The paper discussed most in the news over the past week was "A Cooperative Dynamic Task Assignment Framework for COTSBot AUVs" by Amin Abbasi et al (Jan 2021), which was referenced 1 time, including in the article Using AUVs to control the outbreak of crown-of-thorns starfish in Australia's Great Barrier Reef in Tech Xplore.
Leading researcher Abhinav Gupta (Carnegie Mellon University) published "droidlet: modular, heterogenous, multi-modal agents".