Week Ending 2.28.2021

 

RESEARCH WATCH: 2.28.2021

 

This week was active for "Computer Science", with 1,161 new papers.

  • The paper discussed most in the news over the past week was "Empowering Patients Using Smart Mobile Health Platforms: Evidence From A Randomized Field Experiment" by Anindya Ghose et al. (Feb 2021), which was referenced 21 times, including in the EurekAlert! article "Diabetes patients' use of mobile health app found to improve health outcomes, lower medical costs". Co-author Beibei Li (Carnegie Mellon University) was quoted saying "Given the importance of health behaviors to well-being, health outcomes, and disease processes, mHealth technologies offer significant potential to facilitate patients' lifestyle and behavior modification through patient education, improved autonomous self-regulation, and perceived competence". The paper got social media traction with 7 shares. The authors examine mobile health (mHealth) platforms and their health and economic impacts on the outcomes of chronic disease patients. On Twitter, @aghose commented "We hope our findings will trigger conversations with policy makers to consider wide spread distribution of wearable devices and mhealth apps at subsidized prices so as to benefit larger segments of the population. Full paper is here".

  • Leading researcher Yoshua Bengio (Université de Montréal) published "Towards Causal Representation Learning". @NalKalchbrenner tweeted "Causality in ML is one of those slippery concepts that are hard to get a good grip on - a bit like the concepts of consciousness and perhaps truth. This paper makes an attempt 👇".

  • The paper shared the most on social media this week is by a team at Google: "How to represent part-whole hierarchies in a neural network" by Geoffrey Hinton (Feb 2021), with 1,048 shares. The authors note that the paper does not describe a working system. @CSProfKGD (Kosta Derpanis) tweeted "Back to where it all started Geoff Hinton’s first paper".

This week was very active for "Computer Science - Artificial Intelligence", with 203 new papers.

  • The paper discussed most in the news over the past week was by a team at Google: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" by William Fedus et al. (Jan 2021), which was referenced 18 times, including in the VentureBeat article "GPT-3: We’re at the very beginning of a new app ecosystem". The paper also got the most social media traction, with 832 shares. On Twitter, paper author @LiamFedus observed "Pleased to share new work! We design a sparse language model that scales beyond a trillion parameters. These versions are significantly more sample efficient and obtain up to 4-7x speed-ups over popular models like T5-Base, T5-Large, T5-XXL. Preprint".

  • Leading researcher Yoshua Bengio (Université de Montréal) published "Towards Causal Representation Learning", which was also the paper shared the most on social media in this category, with 390 tweets. @NalKalchbrenner (Nal) tweeted "Causality in ML is one of those slippery concepts that are hard to get a good grip on - a bit like the concepts of consciousness and perhaps truth. This paper makes an attempt 👇".

Over the past week, 200 new papers were published in "Computer Science - Computer Vision and Pattern Recognition".

  • The paper discussed most in the news over the past week was by a team at DeepMind: "High-Performance Large-Scale Image Recognition Without Normalization" by Andrew Brock et al. (Feb 2021), which was referenced 7 times, including in Data Science Weekly Newsletter - Issue 410 (Feb 18, 2021). The paper also got the most social media traction, with 1,076 shares. The authors develop an adaptive gradient clipping technique that overcomes the training instabilities of unnormalized networks, and use it to design a significantly improved class of Normalizer-Free ResNets. On Twitter, @sohamde_ posted "Releasing NFNets: SOTA on ImageNet. Without normalization layers! Code: This is the third paper in a series that began by studying the benefits of BatchNorm and ended by designing highly performant networks w/o it. A thread: 1/8".
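The adaptive gradient clipping idea mentioned in that paper can be sketched in a few lines: a gradient is rescaled whenever its norm grows too large relative to the norm of the corresponding weight. This is a simplified per-tensor version (the paper applies the rule unit-wise, per output channel), and the `clip` and `eps` parameter names are illustrative, not the paper's code:

```python
import numpy as np

def adaptive_gradient_clip(grad, weight, clip=0.01, eps=1e-3):
    """Sketch of adaptive gradient clipping: scale the gradient down
    whenever its norm exceeds `clip` times the weight norm."""
    w_norm = max(np.linalg.norm(weight), eps)  # guard against zero-init weights
    g_norm = np.linalg.norm(grad)
    max_norm = clip * w_norm
    if g_norm > max_norm:
        grad = grad * (max_norm / g_norm)  # rescale, preserving direction
    return grad

# Example: a gradient far larger than the weight is scaled down so that
# its norm is exactly `clip` times the weight norm.
w = np.ones(4)          # ||w|| = 2
g = np.full(4, 100.0)   # ||g|| = 200
clipped = adaptive_gradient_clip(g, w, clip=0.01)
# ||clipped|| / ||w|| now equals the clip threshold 0.01
```

Because the threshold scales with the weight norm, large layers tolerate large gradients while small ones are clipped aggressively, which is what lets the networks train stably without batch normalization.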

  • Leading researcher Ilya Sutskever (OpenAI) published "Zero-Shot Text-to-Image Generation", which had 24 shares over the past 3 days. @poolio tweeted "One of the tricks to learn better reconstructions for the discrete VAE in DALL-E is to use beta > 1. Didn't expect that one 🤔".

  • The paper shared the most on social media this week is by a team at Google: "How to represent part-whole hierarchies in a neural network" by Geoffrey Hinton (Feb 2021), with 1,048 shares.

Over the past week, 29 new papers were published in "Computer Science - Computers and Society".

This week was active for "Computer Science - Human-Computer Interaction", with 26 new papers.

This week was very active for "Computer Science - Learning", with 448 new papers.

This week was active for "Computer Science - Multiagent Systems", with 23 new papers.

Over the past week, 25 new papers were published in "Computer Science - Neural and Evolutionary Computing".

This week was very active for "Computer Science - Robotics", with 85 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.