Eye On AI

Week Ending 3.20.2022

RESEARCH WATCH: 3.20.2022

SPONSORED BY

ClearML is an open-source MLOps solution. Whether you're a data engineer, ML engineer, DevOps engineer, or data scientist, ClearML is hands-down the best collaborative MLOps tool, with full visibility and extensibility.

This week was active for "Computer Science", with 1,404 new papers.

This week was very active for "Computer Science - Artificial Intelligence", with 228 new papers.

  • The paper discussed most in the news over the past week was "Compute Trends Across Three Eras of Machine Learning" by Jaime Sevilla et al. (Feb 2022), which was referenced 12 times, including in the article "Why changing computing trends across different ML eras matter" in Analytics India Magazine. Paper author Tamay Besiroglu was quoted as saying "Seeing so many prominent machine learning folks ridiculing this idea is disappointing". The paper also got the most social media traction, with 494 shares. The investigators study trends in the most readily quantified factor: compute. A Twitter user, @TShevlane, posted "Remember the year 2010? We now have AI systems that take roughly 10 billion times more compute to train than back then. Seems like an important shift!", while @ohlennart observed "Compared to AI and Compute we find a slower, but still tremendous, doubling rate of 6 months instead of their 3.4 months. We analyze this difference in Appendix E 5/". A quick sketch of this doubling-time arithmetic appears after this list.

  • Leading researcher Pieter Abbeel (UC Berkeley) came out with "SURF: Semi-supervised Reward Learning with Data Augmentation for Feedback-efficient Preference-based Reinforcement Learning".

  • The paper shared most on social media this week was "Efficient Language Modeling with Sparse all-MLP" by Ping Yu et al. (Mar 2022), with 231 shares. The investigators analyze the limitations of MLPs in expressiveness and propose sparsely activated MLPs with mixture-of-experts (MoEs) in both the feature and input (token) dimensions. @kawamuramasahar (川村正春 @ 五城目人工知能アカデミー) tweeted: "propose sparsely activated MLPs with mixture-of-experts (MoEs) in both feature and input dimensions improves language modeling perplexity and obtains up to 2x improvement in training efficiency compared to Transformer-based MoEs and dense Transformers". A toy sketch of top-1 MoE routing appears after this list.

  • The most influential Twitter user discussing papers is Jordan Ellenberg, who shared "Bertrand's Postulate for Carmichael Numbers" by Daniel Larsen (Nov 2021) and said: "Here's Larsen's paper on Carmichael numbers, which finished 4th in this year's Regeneron". A short sketch of Korselt's criterion, the standard test for Carmichael numbers, appears after this list.
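
Following up on the compute-trends item above: the difference between a 3.4-month and a 6-month doubling time compounds dramatically. Below is a minimal back-of-the-envelope sketch in Python; the function names and the 6-year and 12-year spans are our own illustrative assumptions, not figures taken from the paper.

```python
import math

def growth_factor(months: float, doubling_time_months: float) -> float:
    """Total compute growth over `months`, given a fixed doubling time."""
    return 2.0 ** (months / doubling_time_months)

def implied_doubling_time(months: float, total_growth: float) -> float:
    """Doubling time (in months) implied by a total growth factor."""
    return months * math.log(2) / math.log(total_growth)

span = 6 * 12  # an illustrative 6-year span, in months
print(f"6-month doubling:   {growth_factor(span, 6.0):,.0f}x")   # 4,096x
print(f"3.4-month doubling: {growth_factor(span, 3.4):,.0f}x")   # ~2,370,000x

# @TShevlane's "10 billion times more compute" since roughly 2010 would
# correspond to a doubling time of about 4.3 months over ~12 years:
print(f"{implied_doubling_time(12 * 12, 1e10):.1f} months")
```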
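For the sparse all-MLP paper: the core idea of sparse activation is that each token is processed by only one (or a few) expert MLPs, selected by a learned gate, so model capacity grows without a proportional increase in compute per token. Here is a toy PyTorch sketch of top-1 routing; the paper's sMLP also applies MoE along the token dimension and uses more careful load balancing, both of which this illustration omits. The class name and shapes are our own.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top1MoEMLP(nn.Module):
    """Toy sparsely activated MLP block: each token is routed to a single
    expert MLP chosen by a learned gate (top-1 routing)."""

    def __init__(self, d_model: int, d_hidden: int, n_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_hidden),
                nn.GELU(),
                nn.Linear(d_hidden, d_model),
            )
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) -> flatten tokens for routing
        tokens = x.reshape(-1, x.shape[-1])
        scores = F.softmax(self.gate(tokens), dim=-1)
        weight, expert_idx = scores.max(dim=-1)  # top-1 expert per token
        out = torch.zeros_like(tokens)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                # Scale by the gate probability so routing stays differentiable.
                out[mask] = weight[mask, None] * expert(tokens[mask])
        return out.reshape(x.shape)

# Example: route 2 sequences of 8 tokens through 4 experts.
layer = Top1MoEMLP(d_model=32, d_hidden=64, n_experts=4)
print(layer(torch.randn(2, 8, 32)).shape)  # torch.Size([2, 8, 32])
```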
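And for Larsen's result: a Carmichael number is a composite n that passes the Fermat test a^n ≡ a (mod n) for every base a. Korselt's criterion characterizes them exactly: n must be composite, squarefree, and p - 1 must divide n - 1 for every prime p dividing n. A small Python sketch (the helper names are ours, not from the paper):

```python
def prime_factors(n: int) -> dict[int, int]:
    """Prime factorization of n by trial division (fine for small n)."""
    factors: dict[int, int] = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_carmichael(n: int) -> bool:
    """Korselt's criterion: n is Carmichael iff it is composite, squarefree,
    and p - 1 divides n - 1 for every prime p dividing n."""
    factors = prime_factors(n)
    if len(factors) < 2:  # excludes primes; prime powers fail squarefreeness
        return False
    return all(e == 1 and (n - 1) % (p - 1) == 0 for p, e in factors.items())

# The Carmichael numbers below 10,000:
print([n for n in range(2, 10_000) if is_carmichael(n)])
# [561, 1105, 1729, 2465, 2821, 6601, 8911]
```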

This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 415 new papers.

Over the past week, 26 new papers were published in "Computer Science - Computers and Society".

This week was very active for "Computer Science - Human-Computer Interaction", with 36 new papers.

This week was extremely active for "Computer Science - Learning", with 477 new papers.

This week was active for "Computer Science - Multiagent Systems", with 26 new papers.

Over the past week, 18 new papers were published in "Computer Science - Neural and Evolutionary Computing".

This week was very active for "Computer Science - Robotics", with 104 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.