Week Ending 2.28.2021
RESEARCH WATCH: 2.28.2021
This week was active for "Computer Science", with 1,161 new papers.
The paper discussed most in the news over the past week was "Empowering Patients Using Smart Mobile Health Platforms: Evidence From A Randomized Field Experiment" by Anindya Ghose et al (Feb 2021), which was referenced 21 times, including in the article Diabetes patients use of mobile health app found to improve health outcomes, lower medical costs in EurekAlert!. The paper author, Beibei Li (Carnegie Mellon University), was quoted saying "Given the importance of health behaviors to well-being, health outcomes, and disease processes, mHealth technologies offer significant potential to facilitate patients' lifestyle and behavior modification through patient education, improved autonomous self-regulation, and perceived competence". The paper got social media traction with 7 shares. The authors examine the health and economic impact of mobile health (mHealth) platforms on outcomes for chronic disease patients. On Twitter, @aghose commented "We hope our findings will trigger conversations with policy makers to consider wide spread distribution of wearable devices and mhealth apps at subsidized prices so as to benefit larger segments of the population. Full paper is here".
Leading researcher Yoshua Bengio (Université de Montréal) published "Towards Causal Representation Learning". @NalKalchbrenner tweeted "Causality in ML is one of those slippery concepts that are hard to get a good grip on - a bit like the concepts of consciousness and perhaps truth. This paper makes an attempt 👇".
The paper shared the most on social media this week is by a team at Google: "How to represent part-whole hierarchies in a neural network" by Geoffrey Hinton (Feb 2021) with 1048 shares. The paper does not describe a working system. @CSProfKGD (Kosta Derpanis) tweeted "Back to where it all started Geoff Hinton’s first paper".
This week was very active for "Computer Science - Artificial Intelligence", with 203 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al (Jan 2021), which was referenced 18 times, including in the article GPT-3: We’re at the very beginning of a new app ecosystem in Venturebeat. In that coverage, researcher William Agnew was quoted saying "Words on the list are many times used in very offensive ways but they can also be appropriate depending on context and your identity." The paper also got the most social media traction with 832 shares. Lead author William Fedus (@LiamFedus) tweeted "Pleased to share new work! We design a sparse language model that scales beyond a trillion parameters. These versions are significantly more sample efficient and obtain up to 4-7x speed-ups over popular models like T5-Base, T5-Large, T5-XXL. Preprint".
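The sparsity the tweet refers to comes from mixture-of-experts routing: the dense feed-forward block is replaced by many expert feed-forward blocks, and a learned router sends each token to just one of them, so compute per token stays roughly constant as the parameter count grows. Below is a minimal, hedged sketch of such top-1 ("switch") routing in PyTorch; the layer sizes, expert count, and the class name SwitchFFN are illustrative assumptions, not the paper's exact configuration.

# Hedged sketch of top-1 ("switch") expert routing; dimensions and names are illustrative only.
import torch
import torch.nn as nn

class SwitchFFN(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=4):
        super().__init__()
        self.router = nn.Linear(d_model, num_experts)  # scores experts for each token
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        probs = torch.softmax(self.router(x), dim=-1)
        gate, expert_idx = probs.max(dim=-1)  # route each token to its top-1 expert
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                # scale by the router probability so the router receives gradients
                out[mask] = gate[mask].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(8, 64)
print(SwitchFFN()(tokens).shape)  # torch.Size([8, 64])

In the paper this routing is combined with auxiliary load-balancing losses and per-expert capacity limits, none of which this sketch includes.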
Leading researcher Yoshua Bengio (Université de Montréal) published "Towards Causal Representation Learning", which was also the paper shared the most on social media with 390 tweets. @NalKalchbrenner (Nal) tweeted "Causality in ML is one of those slippery concepts that are hard to get a good grip on - a bit like the concepts of consciousness and perhaps truth. This paper makes an attempt 👇".
Over the past week, 200 new papers were published in "Computer Science - Computer Vision and Pattern Recognition".
The paper discussed most in the news over the past week was by a team at DeepMind: "High-Performance Large-Scale Image Recognition Without Normalization" by Andrew Brock et al (Feb 2021), which was referenced 7 times, including in the article Data Science Weekly Newsletter - Issue 410 (Feb 18, 2021) in Data Science Weekly. The paper also got the most social media traction with 1076 shares. The authors develop an adaptive gradient clipping technique which overcomes the training instabilities that arise when batch normalization is removed, and design a significantly improved class of Normalizer-Free ResNets. A Twitter user, @sohamde_, posted "Releasing NFNets: SOTA on ImageNet. Without normalization layers! Code: This is the third paper in a series that began by studying the benefits of BatchNorm and ended by designing highly performant networks w/o it. A thread: 1/8".
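The adaptive gradient clipping mentioned above rescales a parameter's gradient whenever the ratio of the gradient norm to the parameter norm exceeds a threshold, instead of clipping by a fixed global norm. A minimal sketch of that idea is below; the per-tensor norms, default values, and the helper name agc_ are simplifying assumptions (the paper computes the ratio unit-wise).

# Hedged sketch of adaptive gradient clipping (AGC): rescale a gradient when
# ||g|| / ||w|| exceeds a threshold. Per-tensor norms are used here for brevity;
# the paper computes the ratio unit-wise (e.g. per output channel).
import torch

def agc_(parameters, clip_lambda=0.01, eps=1e-3):
    for p in parameters:
        if p.grad is None:
            continue
        w_norm = p.detach().norm().clamp_min(eps)  # guard against zero-initialized weights
        g_norm = p.grad.detach().norm()
        max_norm = clip_lambda * w_norm
        if g_norm > max_norm:
            p.grad.mul_(max_norm / (g_norm + 1e-6))

# usage: call after loss.backward() and before optimizer.step()
w = torch.nn.Parameter(torch.randn(16, 16))
(w.sum() ** 2).backward()
agc_([w])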
Leading researcher Ilya Sutskever (OpenAI) published "Zero-Shot Text-to-Image Generation", which had 24 shares over the past 3 days. @poolio tweeted "One of the tricks to learn better reconstructions for the discrete VAE in DALL-E is to use beta > 1. Didn't expect that one 🤔".
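The "beta > 1" in that tweet refers to upweighting the KL term of the variational objective relative to the reconstruction term. A minimal, hedged sketch of a beta-weighted VAE loss is shown below; it uses a Gaussian latent for brevity, whereas DALL-E's first stage is a discrete VAE, so treat this only as an illustration of where beta enters.

# Hedged sketch of a beta-weighted VAE objective: beta > 1 increases the weight
# of the KL term relative to reconstruction. Gaussian latent shown for brevity;
# DALL-E's first stage is a discrete VAE, so this is illustrative only.
import torch
import torch.nn.functional as F

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

x = torch.randn(2, 8)
mu, logvar = torch.zeros(2, 4), torch.zeros(2, 4)
print(beta_vae_loss(x, x + 0.1, mu, logvar))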
The paper shared the most on social media this week is by a team at Google: "How to represent part-whole hierarchies in a neural network" by Geoffrey Hinton (Feb 2021).
Over the past week, 29 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was by a team at Carnegie Mellon University: "Gender Bias, Social Bias and Representation: 70 Years of B^Hollywood" by Kunal Khadilkar et al (Feb 2021), which was referenced 5 times, including in the article Bollywood Movies Still Connect Beauty with Fair Skin, Reveals AI-Based Study in Beebom. The paper author, Ashiqur R. KhudaBukhsh (Carnegie Mellon University), was quoted saying "All of these things we kind of knew, but now we have numbers to quantify them". The paper got social media traction with 10 shares. A user, @KunalKhadilkar, tweeted "Hi Alison, I am so glad you found our research interesting. I would love to talk more about how we used linguistic techniques for uncovering social biases. The full paper is available at".
This week was active for "Computer Science - Human-Computer Interaction", with 26 new papers.
The paper discussed most in the news over the past week was "Empowering Patients Using Smart Mobile Health Platforms: Evidence From A Randomized Field Experiment" by Anindya Ghose et al (Feb 2021).
This week was very active for "Computer Science - Learning", with 448 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al (Jan 2021).
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Towards Causal Representation Learning". @NalKalchbrenner tweeted "Causality in ML is one of those slippery concepts that are hard to get a good grip on - a bit like the concepts of consciousness and perhaps truth. This paper makes an attempt 👇".
The paper shared the most on social media this week is by a team at OpenAI: "Zero-Shot Text-to-Image Generation" by Aditya Ramesh et al (Feb 2021) with 694 shares. @poolio (Ben Poole) tweeted "One of the tricks to learn better reconstructions for the discrete VAE in DALL-E is to use beta > 1. Didn't expect that one 🤔".
This week was active for "Computer Science - Multiagent Systems", with 23 new papers.
Over the past week, 25 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was "BENDR: using transformers and a contrastive self-supervised learning task to learn from massive amounts of EEG data" by Demetres Kostas et al (Jan 2021), which was referenced 2 times, including in the article BENDR for BCI: UToronto’s BERT-Inspired DNN Training Approach Learns From Unlabelled EEG Data in SyncedReview.com. The paper got social media traction with 5 shares.
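The contrastive self-supervised task in the title follows the general recipe of masked, contrastive pretraining: the model must identify the true latent representation of a masked segment among distractors. As a rough illustration only, the sketch below shows a generic InfoNCE-style contrastive loss with in-batch negatives; it is not BENDR's exact objective, and the temperature and batch construction are assumptions.

# Hedged sketch of a generic InfoNCE-style contrastive loss: each prediction
# should match its own target (the diagonal) against in-batch negatives.
# Illustrative only; not the exact BENDR objective.
import torch
import torch.nn.functional as F

def info_nce(pred, target, temperature=0.1):
    pred = F.normalize(pred, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = pred @ target.t() / temperature  # (N, N) cosine-similarity matrix
    labels = torch.arange(pred.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

pred, target = torch.randn(16, 32), torch.randn(16, 32)
print(info_nce(pred, target))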
This week was very active for "Computer Science - Robotics", with 85 new papers.
The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "Digger Finger: GelSight Tactile Sensor for Object Identification Inside Granular Media" by Radhen Patel et al (Feb 2021), which was referenced 2 times, including in the article 'Digging' robot can locate objects concealed by granular media in GlobalSpec. The paper was shared once on social media. The researchers present an early prototype of the Digger Finger that is designed to easily penetrate granular media and is equipped with the GelSight sensor.
Leading researcher Pieter Abbeel (University of California, Berkeley) came out with "Task-Agnostic Morphology Evolution".