Week Ending 2.7.2021
RESEARCH WATCH: 2.7.2021
This week was active for "Computer Science", with 1,163 new papers.
The paper discussed most in the news over the past week was "An Early Look at the Parler Online Social Network" by Max Aliapoulios et al (Jan 2021), which was referenced 139 times, including in the article Parler talk got darker as Trump spoke Jan. 6 in Poughkeepsie Journal. The paper author, Jeremy Blackburn (Binghamton University), was quoted saying "This uptick in sentiment, especially its seeming relationship to the prayer, can be interpreted as an uplifting rallying cry". The paper got social media traction with 55 shares. A user, @PatrickFGleason, tweeted "The unsurprising #Parler post-mortem: analysis of 120M posts (from 2.1M users) verifies the dominance of conspiracy-spewing amplification of Trump-speak".
Leading researcher Kyunghyun Cho (New York University) came out with "Self-Supervised Equivariant Scene Synthesis from Video".
The paper shared the most on social media this week is by a team at Stanford University: "Embodied Intelligence via Learning and Evolution" by Agrim Gupta et al (Feb 2021) with 251 shares. @dkislyuk (Dmitry Kislyuk) tweeted "This seems like a fascinating research direction. Evolution-based "outer loop", optimizer-based "inner loop" (ideally that balance can be learned by the system), with the truly creative human-engineered parts being the genotype design space + task list. Simulated Baldwin effect!".
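The "evolution-based outer loop, optimizer-based inner loop" idea in the tweet above can be illustrated with a toy sketch. Everything here (the scalar fitness task, the mutation scheme, the constants) is an invented minimal example for intuition, not the paper's actual method:

```python
# Toy sketch: an evolutionary outer loop over "genotypes" (innate parameters),
# with a gradient-descent inner loop standing in for lifetime learning.
import random

random.seed(0)
TARGET = 3.0  # the "environment": fitness rewards a phenotype near this value

def inner_loop(genotype, steps=20, lr=0.1):
    """Lifetime learning: gradient descent on squared error from TARGET,
    starting from the genotype's innate parameter."""
    w = genotype
    for _ in range(steps):
        w -= lr * 2 * (w - TARGET)
    return w

def fitness(genotype):
    # Fitness is evaluated AFTER learning, so evolution selects genotypes
    # that learn well -- a crude simulation of the Baldwin effect.
    w = inner_loop(genotype)
    return -(w - TARGET) ** 2

# Outer loop: simple truncation selection plus Gaussian mutation.
population = [random.uniform(-10, 10) for _ in range(8)]
for generation in range(30):
    parents = sorted(population, key=fitness, reverse=True)[:4]
    population = parents + [p + random.gauss(0, 0.5) for p in parents]

best = max(population, key=fitness)
print(inner_loop(best))  # learned phenotype ends up near TARGET
```

The key design point the tweet highlights is the split of responsibility: the inner loop does fast, differentiable adaptation within a lifetime, while the outer loop searches the non-differentiable design space (here a scalar; in the paper, morphologies).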
This week was very active for "Computer Science - Artificial Intelligence", with 191 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al (Jan 2021), which was referenced 16 times, including in the article AI and the List of Dirty, Naughty … and Otherwise Bad Words in Wired News. The paper author, William Agnew, was quoted saying "Words on the list are many times used in very offensive ways but they can also be appropriate depending on context and your identity." The paper also got the most social media traction with 798 shares. A user, @LiamFedus, tweeted "Pleased to share new work! We design a sparse language model that scales beyond a trillion parameters. These versions are significantly more sample efficient and obtain up to 4-7x speed-ups over popular models like T5-Base, T5-Large, T5-XXL. Preprint".
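The "simple and efficient sparsity" in the title refers to routing each token to a single expert network. Below is a minimal NumPy sketch of that top-1 ("switch") routing idea; the shapes, names, and use of plain linear experts are our own illustrative assumptions, not the paper's implementation:

```python
# Illustrative sketch of top-1 expert routing: each token activates only one
# of several expert networks, so compute stays roughly constant as experts
# (and thus parameters) are added.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 8, 4, 16

# Router: a single linear layer scoring each token against each expert.
W_router = rng.normal(size=(d_model, n_experts))
# Experts: here just independent linear maps for simplicity.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def switch_layer(x):
    """Route each token to its single highest-probability expert."""
    logits = x @ W_router                          # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)      # softmax gate
    chosen = probs.argmax(axis=1)                  # top-1 expert per token
    out = np.empty_like(x)
    for e in range(n_experts):
        mask = chosen == e
        # Scale each expert's output by its gate probability, so the
        # router still receives gradient signal in a trained model.
        out[mask] = (x[mask] @ experts[e]) * probs[mask, e:e + 1]
    return out, chosen

x = rng.normal(size=(n_tokens, d_model))
y, assignment = switch_layer(x)
```

Because only one expert runs per token, total parameters can grow with the number of experts while per-token FLOPs stay roughly fixed, which is the source of the speed-ups the tweet mentions.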
The paper shared the most on social media this week is by a team at Carnegie Mellon University: "The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics" by Sebastian Gehrmann et al (Feb 2021) with 242 shares. @huggingface (Hugging Face) tweeted "Models that can classify text are great (RIP GLUE 😜), but how good are we actually at generating language? 💎GEM will help answer this question by contrasting models and evaluation methods in several languages We were super proud to help set it up along with a fantastic team!".
Over the past week, 201 new papers were published in "Computer Science - Computer Vision and Pattern Recognition".
The paper discussed most in the news over the past week was by a team at McGill University: "COVID-19 Prognosis via Self-Supervised Representation Learning and Multi-Image Prediction" by Anuroop Sriram et al (Jan 2021), which was referenced 4 times, including in the article Facebook & NYU reduce Covid hospital strain — Covid Prognosis Via Self-Supervised Learning in Towards Data Science. The paper also got the most social media traction with 325 shares. A Twitter user, @RaaDean, observed "This is amazing, I wonder if it can predict if we can catch COVID #100DaysOfCode", while @HelpedHope posted "Really impressive. Hope it works!".
Leading researcher Kyunghyun Cho (New York University) published "Self-Supervised Equivariant Scene Synthesis from Video".
The paper shared the most on social media this week is by a team at University of North Carolina at Chapel Hill: "Unifying Vision-and-Language Tasks via Text Generation" by Jaemin Cho et al (Feb 2021) with 96 shares.
Over the past week, 25 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was "An Early Look at the Parler Online Social Network" by Max Aliapoulios et al (Jan 2021).
The paper shared the most on social media this week is "The Privatization of AI Research(-ers): Causes and Potential Consequences -- From university-industry interaction to public research brain-drain?" by Roman Jurowetzki et al (Feb 2021) with 92 shares. The investigators analyze the causes and discuss potential consequences of perceived privatization of AI research, particularly the transition of AI researchers from academia to industry. @aaron_j_chavez (Aaron Chavez ☀️) tweeted "Interesting! As AI research gets more and more expensive, it's only going to keep getting concentrated in industry. What would it look like to have something akin to CERN's particle physics lab but for AI research? "CERN for AI"".
This week was very active for "Computer Science - Human-Computer Interaction", with 41 new papers.
The paper discussed most in the news over the past week was "Auditing E-Commerce Platforms for Algorithmically Curated Vaccine Misinformation" by Prerna Juneja et al (Jan 2021), which was referenced 4 times, including in the article Amazon’s search algorithm spreads vaccine disinformation, study finds in The Next Web. The paper author, Prerna Juneja, was quoted saying "We found out that once users start engaging with misinformation on the platform, they are presented with more misinformation at various points in their Amazon navigation route". The paper got social media traction with 25 shares. A Twitter user, @JMarkOckerbloom, said "Link to preprint discussed in the story: Among the findings are that misinforming books often get ranked higher than books that debunk them, and that you get notably more misinformation recommendations once you click on one of them (even if you don't buy)".
This week was very active for "Computer Science - Learning", with 418 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al (Jan 2021).
Leading researcher Sergey Levine (University of California, Berkeley) came out with "How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned".
The paper shared the most on social media this week is by a team at Stanford University: "Embodied Intelligence via Learning and Evolution" by Agrim Gupta et al (Feb 2021).
Over the past week, 16 new papers were published in "Computer Science - Multiagent Systems".
The paper shared the most on social media this week is "baller2vec: A Multi-Entity Transformer For Multi-Agent Spatiotemporal Modeling" by Michael A. Alcorn et al (Feb 2021) with 59 shares.
Over the past week, 27 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was "Can a Fruit Fly Learn Word Embeddings?" by Yuchen Liang et al (Jan 2021), which was referenced 3 times, including in the article Radar trends to watch: February 2021 in O'Reilly Network. The paper author, Yuchen Liang, was quoted saying "We view this result as an example of a general statement that biologically inspired algorithms might be more compute efficient compared with their classical (non-biological) counterparts". The paper got social media traction with 195 shares. A Twitter user, @hurrythas, commented "Imagine being a fruit fly, you got like two months to live and mfkrs tryna teach you how to read. Fruits and flying are more important 😤", while @KevinKaichuang commented "Learning word embeddings using a neural network inspired by fruit fly brains. Anyways if you need me I'll be teaching a fruit fly to embed proteins".
The paper shared the most on social media this week is by a team at Stanford University: "Embodied Intelligence via Learning and Evolution" by Agrim Gupta et al (Feb 2021).
This week was active for "Computer Science - Robotics", with 55 new papers.
The paper discussed most in the news over the past week was by a team at OpenAI: "Asymmetric self-play for automatic goal discovery in robotic manipulation" by OpenAI et al (Jan 2021), which was referenced 3 times, including in the article Best of arXiv — January 2021 in Towards Data Science. The paper got social media traction with 102 shares. A user, @lilianweng, tweeted "Our paper on training a single goal-conditioned policy 100% with asymmetric self-play to generalize to many unseen objects and tasks: and more cool videos are available at (The attached video is zero-shot)".
Leading researcher Sergey Levine (University of California, Berkeley) published "How to Train Your Robot with Deep Reinforcement Learning; Lessons We've Learned".
The paper shared the most on social media this week is by a team at Stanford University: "Embodied Intelligence via Learning and Evolution" by Agrim Gupta et al (Feb 2021).