Week Ending 1.17.2021
RESEARCH WATCH: 1.17.2021
Over the past week, 894 new papers were published in "Computer Science".
The paper discussed most in the news over the past week was "An Early Look at the Parler Online Social Network" by Max Aliapoulios et al (Jan 2021), which was referenced 37 times, including in the article MIL-OSI Global: Does ‘deplatforming’ work to curb hate speech and calls for violence? 3 experts in online communications weigh in in Foreign Affairs.co.nz. The paper got social media traction with 16 shares. A user, @PatrickFGleason, tweeted "The unsurprising #Parler post-mortem: analysis of 120M posts (from 2.1M users) verifies the dominance of conspiracy-spewing amplification of Trump-speak".
Leading researcher Yoshua Bengio (Université de Montréal) published "Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias".
The paper shared the most on social media this week is by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al (Jan 2021) with 596 shares. @moinnadeem (Moin Nadeem) tweeted "Please... make it stop. (I understand why people do this, but man, the Scaling Laws hypothesis is really annoying sometimes)".
The most influential Twitter user discussing papers is Namita, who shared "Expected Hypothetical Completion Probability" by Sameer K. Deshpande et al (Oct 2019) and said: "one of my fave sports analytics papers! definitely worth a read and a listen".
This week was active for "Computer Science - Artificial Intelligence", with 126 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al (Jan 2021), which was referenced 7 times, including in the article Google trained a trillion-parameter AI language model in Venturebeat. The paper also got the most social media traction with 596 shares. On Twitter, @LiamFedus commented "Pleased to share new work! We design a sparse language model that scales beyond a trillion parameters. These versions are significantly more sample efficient and obtain up to 4-7x speed-ups over popular models like T5-Base, T5-Large, T5-XXL. Preprint".
Over the past week, 186 new papers were published in "Computer Science - Computer Vision and Pattern Recognition".
The paper discussed most in the news over the past week was by a team at Johns Hopkins University: "A Large-Scale, Time-Synchronized Visible and Thermal Face Dataset" by Domenick Poster et al (Jan 2021), which was referenced 3 times, including in the article The US Army is developing a nightmarish thermal facial recognition system in The Next Web. The paper got social media traction with 23 shares. On Twitter, @AASchapiro commented "The US Army...night-time facial recognition...what could go wrong? A paper by researchers at the University of West Virginia describes the Army Research Laboratory's 500,000 image "Thermal Face Dataset." h/t".
Leading researcher Dhruv Batra (Georgia Institute of Technology) came out with "Memory-Augmented Reinforcement Learning for Image-Goal Navigation". The researchers address the problem of image-goal navigation in the context of visually realistic 3D environments.
The paper shared the most on social media this week is by a team at McGill University: "COVID-19 Deterioration Prediction via Self-Supervised Representation Learning and Multi-Image Prediction" by Anuroop Sriram et al (Jan 2021) with 288 shares. @RaaDean (shena dean) tweeted "This is amazing, I wonder if it can predict if we can catch COVID #100DaysOfCode".
The most influential Twitter user discussing papers is Namita, who shared "Expected Hypothetical Completion Probability" by Sameer K. Deshpande et al (Oct 2019).
This week was very active for "Computer Science - Computers and Society", with 46 new papers.
The paper discussed most in the news over the past week was "An Early Look at the Parler Online Social Network" by Max Aliapoulios et al (Jan 2021)
The paper shared the most on social media this week is by a team at Princeton University: "What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods" by Arunesh Mathur et al (Jan 2021) with 90 shares. @1lucabelli (Luca Belli) tweeted "A timely analysis on #darkpatterns user interface design and their effects on individuals and society 👇👇👇 Looks extremely interesting! #dataprotection #consumerprotection".
This week was very active for "Computer Science - Human-Computer Interaction", with 47 new papers.
The paper discussed most in the news over the past week was by a team at Microsoft: "Evaluating the Robustness of Collaborative Agents" by Paul Knott et al (Jan 2021), which was referenced 1 time, including in the article Researchers propose using the game Overcooked to benchmark collaborative AI systems in BusinessIntelligenceInfo.com. The paper got social media traction with 19 shares. On Twitter, @gastronomy commented "> In order for agents trained by deep reinforcement learning to work alongside humans in realistic settings, we will need to ensure that the agents are robust".
The paper shared the most on social media this week is by a team at Princeton University: "What Makes a Dark Pattern... Dark? Design Attributes, Normative Considerations, and Measurement Methods" by Arunesh Mathur et al (Jan 2021).
This week was very active for "Computer Science - Learning", with 322 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al (Jan 2021).
Leading researcher Yoshua Bengio (Université de Montréal) published "Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias".
Over the past week, 12 new papers were published in "Computer Science - Multiagent Systems".
The paper discussed most in the news over the past week was by a team at Microsoft: "Evaluating the Robustness of Collaborative Agents" by Paul Knott et al (Jan 2021).
Over the past week, 13 new papers were published in "Computer Science - Neural and Evolutionary Computing".
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Scaling Equilibrium Propagation to Deep ConvNets by Drastically Reducing its Gradient Estimator Bias".
This week was active for "Computer Science - Robotics", with 59 new papers.
The paper discussed most in the news over the past week was by a team at Stanford University: "Where2Act: From Pixels to Actions for Articulated 3D Objects" by Kaichun Mo et al (Jan 2021), which was referenced 1 time, including in the article Stanford researchers propose AI that figures out how to use real-world objects in BusinessIntelligenceInfo.com. The paper got social media traction with 43 shares. The authors take a step towards that long-term goal: they extract highly localized actionable information related to elementary actions such as pushing or pulling for articulated objects with movable parts. A Twitter user, @DataScienceNIG, posted "AI figures out how to use real-world objects. Researchers from & have developed a novel method to predict per-pixel actionable information for manipulating articulated 3D objects using the PartNet-Mobility dataset. Paper".
Leading researcher Sergey Levine (University of California, Berkeley) came out with "SimGAN: Hybrid Simulator Identification for Domain Adaptation via Adversarial Reinforcement Learning".
The paper shared the most on social media this week is by a team at OpenAI: "Asymmetric self-play for automatic goal discovery in robotic manipulation" by OpenAI et al (Jan 2021) with 85 shares. @HenriquePonde (Henrique Ponde) tweeted "Very proud that our work is finally out!".