Week Ending 8.2.2020
RESEARCH WATCH: 8.2.2020
This week was active for "Computer Science - Artificial Intelligence", with 98 new papers.
The paper discussed most in the news over the past week was "Iterative Effect-Size Bias in Ridehailing: Measuring Social Bias in Dynamic Pricing of 100 Million Rides" by Akshat Pandey et al (Jun 2020), which was referenced 17 times, including in the article How Machine Learning is Influencing Diversity & Inclusion in Information Week. The paper author, Aylin Caliskan (George Washington University), was quoted saying "When machine learning is applied to social data, the algorithms learn the statistical regularities of the historical injustices and social biases embedded in these data sets". The paper got social media traction with 78 shares. The authors develop a random - effects based metric for the analysis of social bias in supervised machine learning prediction models where model outputs depend on U.S. locations. A user, @DavidZipper, tweeted "Analyzing 100 million Chicago ride hail trips, researchers found significant evidence of bias. Algorithms used by Uber/Lyft/Via led to higher fares for those going to neighborhoods with a high share of minority or older residents, for example. DL link".
The paper shared the most on social media this week is by a team at Google: "Towards Learning Convolutions from Scratch" by Behnam Neyshabur (Jul 2020) with 285 shares. @hardmaru (hardmaru) tweeted "Towards Learning Convolutions from Scratch “As ML moves towards reducing the expert bias and learning it from data, a natural next step seems to be learning convolution-like structures from scratch.” Would be great to find the "ConvNet" for new domains".
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 268 new papers.
The paper discussed most in the news over the past week was by a team at Stanford University: "Pruning neural networks without any data by iteratively conserving synaptic flow" by Hidenori Tanaka et al. (Jun 2020), which was referenced 24 times, including in the article Research Opens New Neural Network Model Pathway to Understanding the Brain in TMC Net. Paper author Hidenori Tanaka was quoted saying "Unlike natural systems that physicists usually deal with, our brain is notoriously complicated and sometimes rejects simple mathematical models". The paper got social media traction with 343 shares. On Twitter, @Hidenori8Tanaka posted "Q. Can we find winning lottery tickets, or sparse trainable deep networks at initialization without ever looking at data? A. Yes, by conserving "Synaptic Flow" via our new SynFlow algorithm. co-led with Daniel Kunin & paper: 1/".
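For readers curious how SynFlow prunes without data: the paper linearizes the network by taking absolute values of the weights, feeds in an all-ones input, and scores each weight by |weight x gradient|, removing the lowest-scored weights over many rounds with an exponentially tightening sparsity schedule. The PyTorch sketch below is a simplified rendering of that loop, assuming a ReLU network without batch norm; the function name, defaults, and toy model are illustrative.

import torch
import torch.nn as nn

def synflow_prune(model, input_shape, sparsity=0.1, rounds=100):
    """Iteratively prune `model` at initialization, keeping a `sparsity`
    fraction of weights, using only an all-ones input (no data)."""
    params = [p for p in model.parameters() if p.dim() > 1]  # weight tensors
    signs = [p.detach().sign() for p in params]
    masks = [torch.ones_like(p) for p in params]
    for p in params:
        p.data.abs_()                  # linearize: R = 1^T (prod_l |W_l|) 1

    for k in range(1, rounds + 1):
        for p, m in zip(params, masks):
            p.data.mul_(m)             # keep previously pruned weights at zero
        model.zero_grad()
        model(torch.ones(1, *input_shape)).sum().backward()
        scores = [(p.grad * p.data).abs() for p in params]   # synaptic flow
        flat = torch.cat([s.flatten() for s in scores])
        keep = int(flat.numel() * sparsity ** (k / rounds))  # exponential schedule
        threshold = torch.topk(flat, max(keep, 1)).values.min()
        masks = [(s >= threshold).float() for s in scores]

    for p, m, s in zip(params, masks, signs):
        p.data.mul_(m).mul_(s)         # apply final mask, restore weight signs
    return masks

# Toy usage on a small MLP (shapes are illustrative)
net = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
masks = synflow_prune(net, input_shape=(784,), sparsity=0.1)

The iterative schedule is the key to the "conservation" result in the title: pruning a little at a time and rescoring prevents entire layers from being removed at once, which is the failure mode of single-shot scoring.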
Leading researcher Abhinav Gupta (Carnegie Mellon University) came out with "Demystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases". The investigators first present quantitative experiments to demystify the gains of contrastive self-supervised learning.
The paper shared the most on social media this week is by a team at Google: "Towards Learning Convolutions from Scratch" by Behnam Neyshabur (Jul 2020).
This week was active for "Computer Science - Computers and Society", with 43 new papers.
The paper discussed most in the news over the past week was "Iterative Effect-Size Bias in Ridehailing: Measuring Social Bias in Dynamic Pricing of 100 Million Rides" by Akshat Pandey et al. (Jun 2020).
The paper shared the most on social media this week is by a team at University of Toronto: "Ethics of Artificial Intelligence in Surgery" by Frank Rudzicz et al. (Jul 2020) with 72 shares. The authors discuss the four key principles of biomedical ethics in a surgical context. @Laparoscopes (Dan Hashimoto, MD MS) tweeted "Important chapter on #ethics in the context of and #surgery! Thanks & for contributing! AI in Surgery: An AI Primer for Surgical Practice coming soon!".
This week was very active for "Computer Science - Human-Computer Interaction", with 46 new papers.
The paper shared the most on social media this week is by a team at Hong Kong University of Science and Technology: "Visual Analysis of Discrimination in Machine Learning" by Qianwen Wang et al (Jul 2020) with 70 shares. The authors investigate discrimination in machine learning from a visual analytics perspective and propose an interactive visualization tool, DiscriLens, to support a more comprehensive analysis.
This week was very active for "Computer Science - Learning", with 362 new papers.
The paper discussed most in the news over the past week was by a team at Stanford University: "Pruning neural networks without any data by iteratively conserving synaptic flow" by Hidenori Tanaka et al. (Jun 2020).
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Deriving Differential Target Propagation from Iterating Approximate Inverses".
The paper shared the most on social media this week is by a team at Google: "Towards Learning Convolutions from Scratch" by Behnam Neyshabur (Jul 2020).
Over the past week, eight new papers were published in "Computer Science - Multiagent Systems".
Over the past week, 18 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was by a team at DeepMind: "Strong Generalization and Efficiency in Neural Programs" by Yujia Li et al. (Jul 2020), which was referenced 3 times, including in the article A neural network that spots similarities between programs could help computers code themselves in Technology Review. The paper got social media traction with 423 shares. On Twitter, @hardmaru said "Their learned programs can outperform hand-coded programs in terms of efficiency on several algorithmic tasks, such as sorting, searching in ordered lists and a version of the 0/1 knapsack problem, while also generalizing to instances of arbitrary length".
This week was very active for "Computer Science - Robotics", with 67 new papers.