Eye On AI

Week Ending 2.7.2021

RESEARCH WATCH: 2.7.2021

This week was active for "Computer Science", with 1,163 new papers.

  • The paper discussed most in the news over the past week was "An Early Look at the Parler Online Social Network" by Max Aliapoulios et al (Jan 2021), which was referenced 139 times, including in the article Parler talk got darker as Trump spoke Jan. 6 in Poughkeepsie Journal. The paper author, Jeremy Blackburn (Binghamton University), was quoted saying "This uptick in sentiment, especially its seeming relationship to the prayer, can be interpreted as an uplifting rallying cry". The paper got social media traction with 55 shares. A user, @PatrickFGleason, tweeted "The unsurprising #Parler post-mortem: analysis of 120M posts (from 2.1M users) verifies the dominance of conspiracy-spewing amplification of Trump-speak".

  • Leading researcher Kyunghyun Cho (New York University) published "Self-Supervised Equivariant Scene Synthesis from Video".

  • The paper shared the most on social media this week is by a team at Stanford University: "Embodied Intelligence via Learning and Evolution" by Agrim Gupta et al. (Feb 2021) with 251 shares. @dkislyuk (Dmitry Kislyuk) tweeted "This seems like a fascinating research direction. Evolution-based "outer loop", optimizer-based "inner loop" (ideally that balance can be learned by the system), with the truly creative human-engineered parts being the genotype design space + task list. Simulated Baldwin effect!".
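The tweet's framing — an evolutionary "outer loop" over genotypes wrapped around a learning-based "inner loop" — can be sketched in a few lines. This is a toy illustration of that two-loop structure, not the paper's actual algorithm or morphology search space; all function names and the one-dimensional objective are invented for the example.

```python
import random

random.seed(0)  # reproducible toy run

def inner_loop_fitness(genotype, steps=20, lr=0.1):
    """Inner loop: 'lifetime learning' -- gradient descent starting from
    the genotype's value on a toy objective f(x) = (x - 3)^2.  Fitness is
    the negated loss AFTER learning, so genotypes that learn well are
    favored even if they start far from the optimum (a simulated
    Baldwin effect)."""
    x = genotype
    for _ in range(steps):
        grad = 2 * (x - 3.0)  # d/dx of (x - 3)^2
        x -= lr * grad
    return -((x - 3.0) ** 2)

def outer_loop_evolve(pop_size=16, generations=10, mutation_std=0.5):
    """Outer loop: evolution over genotypes via truncation selection
    plus Gaussian mutation."""
    population = [random.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=inner_loop_fitness, reverse=True)
        parents = ranked[: pop_size // 2]          # keep the top half
        children = [p + random.gauss(0, mutation_std) for p in parents]
        population = parents + children
    return max(population, key=inner_loop_fitness)

best = outer_loop_evolve()
```

In the paper's setting the genotype encodes an agent's morphology and the inner loop is reinforcement learning in simulation; here both are collapsed to scalars to keep the two-loop shape visible.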

This week was very active for "Computer Science - Artificial Intelligence", with 191 new papers.

  • The paper discussed most in the news over the past week was by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al. (Jan 2021), which was referenced 16 times, including in the article AI and the List of Dirty, Naughty … and Otherwise Bad Words in Wired News. Researcher William Agnew was quoted in the article saying "Words on the list are many times used in very offensive ways but they can also be appropriate depending on context and your identity." The paper also got the most social media traction with 798 shares. A user, @LiamFedus, tweeted "Pleased to share new work! We design a sparse language model that scales beyond a trillion parameters. These versions are significantly more sample efficient and obtain up to 4-7x speed-ups over popular models like T5-Base, T5-Large, T5-XXL. Preprint".
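The "simple and efficient sparsity" in the title refers to routing each token to a single expert, so only one expert's parameters are active per token even as the total parameter count grows with the number of experts. The sketch below shows only that top-1 routing idea in miniature; the router and "experts" here are tiny stand-ins (a per-dimension scale instead of a full feed-forward block), with all class and variable names invented for the example.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of floats."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

class Top1Router:
    """Illustrative top-1 ('switch') routing: a learned router scores
    every expert, the token is dispatched only to the highest-scoring
    one, and the output is scaled by that expert's gate probability."""

    def __init__(self, d_model, n_experts, seed=0):
        rng = random.Random(seed)
        # Router weights: one row of scores per expert.
        self.w = [[rng.gauss(0, 0.1) for _ in range(d_model)]
                  for _ in range(n_experts)]
        # Each 'expert' is just a per-dimension scale here -- a stand-in
        # for a full feed-forward block in the real model.
        self.experts = [[rng.gauss(1, 0.1) for _ in range(d_model)]
                        for _ in range(n_experts)]

    def __call__(self, token):
        logits = [sum(wi * xi for wi, xi in zip(row, token))
                  for row in self.w]
        probs = softmax(logits)
        k = max(range(len(probs)), key=probs.__getitem__)  # top-1 expert
        out = [probs[k] * e * x for e, x in zip(self.experts[k], token)]
        return k, out

router = Top1Router(d_model=4, n_experts=8)
expert_id, out = router([0.5, -1.0, 0.3, 2.0])
```

Because only one expert runs per token, compute per token stays roughly constant while capacity scales with the expert count — which is the mechanism behind the speed-ups the tweet describes.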

  • The paper shared the most on social media this week is by a team at Carnegie Mellon University: "The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics" by Sebastian Gehrmann et al. (Feb 2021) with 242 shares. @huggingface (Hugging Face) tweeted "Models that can classify text are great (RIP GLUE 😜), but how good are we actually at generating language? 💎GEM will help answer this question by contrasting models and evaluation methods in several languages We were super proud to help set it up along with a fantastic team!".

Over the past week, 201 new papers were published in "Computer Science - Computer Vision and Pattern Recognition".

Over the past week, 25 new papers were published in "Computer Science - Computers and Society".

This week was very active for "Computer Science - Human-Computer Interaction", with 41 new papers.

  • The paper discussed most in the news over the past week was "Auditing E-Commerce Platforms for Algorithmically Curated Vaccine Misinformation" by Prerna Juneja et al (Jan 2021), which was referenced 4 times, including in the article Amazon’s search algorithm spreads vaccine disinformation, study finds in The Next Web. The paper author, Prerna Juneja, was quoted saying "We found out that once users start engaging with misinformation on the platform, they are presented with more misinformation at various points in their Amazon navigation route". The paper got social media traction with 25 shares. A Twitter user, @JMarkOckerbloom, said "Link to preprint discussed in the story: Among the findings are that misinforming books often get ranked higher than books that debunk them, and that you get notably more misinformation recommendations once you click on one of them (even if you don't buy)".

This week was very active for "Computer Science - Learning", with 418 new papers.

Over the past week, 16 new papers were published in "Computer Science - Multiagent Systems".

Over the past week, 27 new papers were published in "Computer Science - Neural and Evolutionary Computing".

  • The paper discussed most in the news over the past week was "Can a Fruit Fly Learn Word Embeddings?" by Yuchen Liang et al (Jan 2021), which was referenced 3 times, including in the article Radar trends to watch: February 2021 in O'Reilly Network. The paper author, Yuchan Liang, was quoted saying "We view this result as an example of a general statement that biologically inspired algorithms might be more compute efficient compared with their classical (non-biological) counterparts". The paper got social media traction with 195 shares. A Twitter user, @hurrythas, commented "Imagine being a fruit fly, you got like two months to live and mfkrs tryna teach you how to read. Fruits and flying are more important 😤", while @KevinKaichuang commented "Learning word embeddings using a neural network inspired by fruit fly brains. Anyways if you need me I'll be teaching a fruit fly to embed proteins".

  • The paper shared the most on social media this week is by a team at Stanford University: "Embodied Intelligence via Learning and Evolution" by Agrim Gupta et al. (Feb 2021) with 251 shares.

This week was active for "Computer Science - Robotics", with 55 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.