Eye On AI


Week Ending 6.21.2020

RESEARCH WATCH: 6.21.2020

This week was active for "Computer Science - Artificial Intelligence", with 136 new papers.

This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 284 new papers.

  • The paper discussed most in the news over the past week was by a team at Google: "SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving" by Zhenpei Yang et al (May 2020), which was referenced 20 times, including in the article Google at CVPR 2020 in Google AI Blog. The paper got social media traction with 57 shares. The researchers present a simple yet effective approach to generate realistic scenario sensor data, based only on a limited amount of lidar and camera data collected by an autonomous vehicle. A Twitter user, @UnHedgedChatter, observed ""To the best of our knowledge, we have built the first purely data-driven camera simulation system for autonomous driving." GOOGGOOGGOOGL Source".

  • Leading researcher Sergey Levine (University of California, Berkeley) published "RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real". A Twitter user, @kthnyt, tweeted "Been wondering for a while about the robustness of sim to real. Would love to test this for non-vision tasks".

  • The paper shared the most on social media this week is by a team at Google: "Big Self-Supervised Models are Strong Semi-Supervised Learners" by Ting Chen et al (Jun 2020) with 291 shares. @evgeniyzhe (Evgenii Zheltonozhskii) tweeted "Self-supervised learning sees a strong boost in performance: first BYOL and now SimCLRv2. Starts to make sense to talk about top-1 performance on 1% of ImageNet -- 76.6% for top model".
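
For readers who want the contrastive pretraining idea behind SimCLRv2-style models made concrete, below is a minimal NumPy sketch of an NT-Xent (normalized temperature-scaled cross-entropy) loss over a batch of paired augmented views. It is an illustrative toy, not the authors' implementation; the batch size, embedding dimension, and temperature value are assumptions.

    import numpy as np

    def nt_xent_loss(z1, z2, temperature=0.5):
        """Toy NT-Xent loss for SimCLR-style contrastive pretraining.

        z1, z2: (batch, dim) embeddings of two augmented views of the same images.
        Each embedding's positive is its counterpart view; all other embeddings
        in the doubled batch act as negatives.
        """
        z = np.concatenate([z1, z2], axis=0)                  # (2N, dim)
        z = z / np.linalg.norm(z, axis=1, keepdims=True)      # unit vectors -> cosine similarity
        sim = z @ z.T / temperature                           # (2N, 2N) scaled similarities
        np.fill_diagonal(sim, -np.inf)                        # exclude self-similarity
        n = z1.shape[0]
        targets = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])  # index of each positive
        log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))   # row-wise log-softmax
        return -log_prob[np.arange(2 * n), targets].mean()

    # Example with random 8-sample batches of 128-d projections (placeholder data)
    rng = np.random.default_rng(0)
    loss = nt_xent_loss(rng.normal(size=(8, 128)), rng.normal(size=(8, 128)))
    print(round(float(loss), 3))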

This week was very active for "Computer Science - Computers and Society", with 46 new papers.

  • The paper discussed most in the news over the past week was "Iterative Effect-Size Bias in Ridehailing: Measuring Social Bias in Dynamic Pricing of 100 Million Rides" by Akshat Pandey et al (Jun 2020), which was referenced 4 times, including in the article Uber and Lyft pricing algorithms charge more in non-white areas in New Scientist. The paper author, Akshat Pandey, was quoted saying "On the surface, dynamic pricing is about supply and demand — but because of the way cities are often segregated by age, race, or income, this can lead to bias that is unintentionally split by neighborhood demographics." The paper got social media traction with 53 shares. The authors develop a random - effects based metric for the analysis of social bias in supervised machine learning prediction models where model outputs depend on U.S. locations. A Twitter user, @DavidZipper, observed "Analyzing 100 million Chicago ride hail trips, researchers found significant evidence of bias. Algorithms used by Uber/Lyft/Via led to higher fares for those going to neighborhoods with a high share of minority or older residents, for example. DL link".

This week was active for "Computer Science - Human-Computer Interaction", with 29 new papers.

This week was extremely active for "Computer Science - Learning", with 737 new papers.

  • The paper discussed most in the news over the past week was by a team at IBM: "Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models" by Jayaraman J. Thiagarajan et al (Apr 2020), which was referenced 5 times, including in the article Confidence Calibration for a more reliable AI in AI-Med.io. The paper author, Jay Thiagarajan, was quoted saying "We were exploring how to make a tool that can potentially support more sophisticated reasoning or inferencing. These AI models systematically provide ways to gain new insights by placing your hypothesis in a prediction space. The question is ‘How should the image look if a person has been diagnosed with a condition A versus condition B?’ Our method can provide the most plausible or meaningful evidence for that hypothesis. We can even obtain a continuous transition of a patient from state A to state B, where the expert or a doctor defines what those states are". The paper got social media traction with 13 shares. The researchers argue that reliability and interpretability are not necessarily disparate objectives and propose to use prediction calibration to meet both (a toy calibration example appears after this section's items). A user, @arXiv__ml, tweeted "#machinelearning The wide-spread adoption of representation learning technologies in clinical decision making strongly emph…".

  • Leading researcher Yoshua Bengio (Université de Montréal) came out with "Untangling tradeoffs between recurrence and self-attention in neural networks". The investigators present a formal analysis of how self-attention affects gradient propagation in recurrent networks, and prove that it mitigates the problem of vanishing gradients when trying to capture long-term dependencies (a toy illustration of attention over past recurrent states appears below).

  • The paper shared the most on social media this week is by a team at Google: "Big Self-Supervised Models are Strong Semi-Supervised Learners" by Ting Chen et al (Jun 2020) with 291 shares.
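
To make the calibration idea in the healthcare paper above concrete, here is a minimal sketch of expected calibration error (ECE), a common way to check whether a model's predicted confidences match its observed accuracy. The binning scheme and the synthetic data are illustrative assumptions, not the paper's method.

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins=10):
        """ECE: average |accuracy - confidence| over equal-width confidence bins,
        weighted by the fraction of samples falling in each bin."""
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            in_bin = (confidences > lo) & (confidences <= hi)
            if in_bin.any():
                gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
                ece += in_bin.mean() * gap
        return ece

    # Toy example: an overconfident model whose accuracy lags its confidence by ~0.15
    rng = np.random.default_rng(2)
    conf = rng.uniform(0.7, 1.0, size=1000)            # predicted confidences
    correct = (rng.uniform(size=1000) < conf - 0.15)   # simulated correctness indicators
    print(f"ECE ~ {expected_calibration_error(conf, correct.astype(float)):.3f}")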
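
The Bengio-group result above rests on the idea that letting a recurrent network attend over its own past states creates shortcut paths for gradients. The toy recurrence below adds a soft-attention readout over stored hidden states at each step; the dimensions, initialization, and update rule are illustrative assumptions rather than the paper's model.

    import numpy as np

    def attentive_rnn(xs, hidden_dim=16, seed=3):
        """Toy RNN whose update also reads an attention-weighted summary of all past states."""
        rng = np.random.default_rng(seed)
        in_dim = xs.shape[1]
        W_x = rng.normal(scale=0.1, size=(hidden_dim, in_dim))
        W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        W_c = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
        h = np.zeros(hidden_dim)
        memory = []                                    # stored past hidden states
        for x in xs:
            if memory:
                past = np.stack(memory)                # (t, hidden_dim)
                scores = past @ h                      # dot-product attention queried by current state
                weights = np.exp(scores - scores.max())
                weights /= weights.sum()
                context = weights @ past               # attention-weighted summary of the past
            else:
                context = np.zeros(hidden_dim)
            h = np.tanh(W_x @ x + W_h @ h + W_c @ context)
            memory.append(h)
        return h                                       # final state has direct paths to every step

    print(attentive_rnn(np.random.default_rng(4).normal(size=(20, 8))).shape)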

Over the past week, 19 new papers were published in "Computer Science - Multiagent Systems".

  • The paper discussed most in the news over the past week was by a team at DeepMind: "Learning to Play No-Press Diplomacy with Best Response Policy Iteration" by Thomas Anthony et al (Jun 2020), which was referenced 2 times, including in the article AI winter; Tech at the protests; A Q&A with Duo cofounder Dug Song in Morning Brew. The paper got social media traction with 29 shares. A user, @RHamptonCISSP, tweeted "Deep reinforcement learning algorithms play Diplomacy, a seven-player #boardgame. No earth-shattering conclusions, but expect follow-up papers as more research is done", while @mathislohaus commented "Quick note for #PoliSciTwitter / IR nerds who got excited about playing Diplomacy: they chose "no press" (no communication between players). --> it's like a *very* fancy chess computer, but hasn't quite managed the art of political intrigue :-)".
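
At its core, best-response-style policy iteration alternates between fixing the other players' current strategies and computing a (possibly approximate) best response to them. The toy below runs fictitious-play-style iterated best response on rock-paper-scissors; it is a didactic sketch of that general loop, not DeepMind's Diplomacy agent.

    import numpy as np

    # Payoff matrix for rock-paper-scissors from the row player's perspective
    PAYOFF = np.array([[ 0, -1,  1],
                       [ 1,  0, -1],
                       [-1,  1,  0]], dtype=float)

    def iterated_best_response(steps=2000):
        """Each player best-responds to the opponent's empirical average strategy,
        then the empirical averages are updated with the chosen actions."""
        counts = [np.ones(3), np.ones(3)]              # action counts for both players
        for _ in range(steps):
            avg = [c / c.sum() for c in counts]
            br_row = np.argmax(PAYOFF @ avg[1])        # best response to column player's average
            br_col = np.argmax(-(avg[0] @ PAYOFF))     # best response to row player's average
            counts[0][br_row] += 1
            counts[1][br_col] += 1
        return [c / c.sum() for c in counts]

    row, col = iterated_best_response()
    print(np.round(row, 2), np.round(col, 2))          # both approach the uniform equilibrium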

Over the past week, 35 new papers were published in "Computer Science - Neural and Evolutionary Computing".

This week was active for "Computer Science - Robotics", with 55 new papers.

  • The paper discussed most in the news over the past week was by a team at University of California, Berkeley: "Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos" by Ajay Kumar Tanwani et al (May 2020), which was referenced 15 times, including in the article Robotic Surgeons That Learn From Videos in Analytics India Magazine. The paper got social media traction with 28 shares. The researchers learn a motion-centric representation of surgical video demonstrations by grouping them into action segments/sub-goals/options in a semi-supervised manner. A Twitter user, @ShanthaRMohan, said "the team needed just 78 videos from the JIGSAWS database to train their AI to perform its task with 85.5 percent segmentation accuracy and an average 0.94 centimeter error in targeting accuracy.", while @EdwardDixon3 said "A long way to go before it can pull on emergency torch-lit C-section, but really interesting to see this (simulated) stitching. Very label-efficient learning. Paper here: #IamIntel".

  • Leading researcher Pieter Abbeel (University of California, Berkeley) published "Automatic Curriculum Learning through Value Disagreement".

  • The paper shared the most on social media this week is by a team at UC Berkeley: "Accelerating Online Reinforcement Learning with Offline Datasets" by Ashvin Nair et al (Jun 2020) with 75 shares.
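
The premise of accelerating online RL with offline datasets can be illustrated at the data level: pre-fill the agent's replay buffer with logged transitions, then keep appending fresh online experience so early updates already benefit from the offline data. The sketch below shows only that buffer mechanics with dummy transition tuples; the paper's full algorithm is not reproduced here.

    import random
    from collections import deque

    class ReplayBuffer:
        """Fixed-capacity FIFO buffer of (state, action, reward, next_state, done) tuples."""
        def __init__(self, capacity=100_000):
            self.storage = deque(maxlen=capacity)

        def add(self, transition):
            self.storage.append(transition)

        def sample(self, batch_size):
            return random.sample(list(self.storage), batch_size)

    # Pre-fill with logged offline transitions (dummy tuples for illustration)
    buffer = ReplayBuffer()
    offline_dataset = [((0.0,), 1, 0.5, (0.1,), False)] * 1000
    for transition in offline_dataset:
        buffer.add(transition)

    # Online fine-tuning keeps adding fresh experience to the same buffer,
    # so early gradient updates already draw on the offline data.
    buffer.add(((0.1,), 0, 1.0, (0.2,), True))
    batch = buffer.sample(32)
    print(len(batch))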


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.