Week Ending 9.20.2020
RESEARCH WATCH: 9.20.2020
This week was very active for "Computer Science - Artificial Intelligence", with 184 new papers.
The paper discussed most in the news over the past week was by a team at DeepMind: "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess" by Nenad Tomašev et al (Sep 2020), which was referenced 8 times, including in the ArsTechnica article "AI ruined chess. Now it’s making the game beautiful again". Paper co-author Vladimir Kramnik was quoted as saying "For quite a number of games on the highest level, half of the game—sometimes a full game—is played out of memory. You don't even play your own preparation; you play your computer's preparation." The paper got social media traction with 284 shares. The researchers use AlphaZero to creatively explore and design new chess variants. On Twitter, @debarghya_das observed "1/5 This chess paper from DeepMind and has absolutely consumed my mind in the last few days. They answered a question many chess players have dreamed of - how fair is chess? If you change the rules, does that change?".
Leading researcher Kyunghyun Cho (New York University) came out with "A Systematic Characterization of Sampling Algorithms for Open-ended Language Generation".
The paper shared the most on social media this week is from Google: "The Hardware Lottery" by Sara Hooker (Sep 2020), with 776 shares. @charles_irl (Charles 🎉 Frye) tweeted "Imagine if consumer graphics had required efficient trees, instead of matmuls. I bet we'd all be talking about "deep tree learning" and GPTree for NLP with 175 bn branches! Really nice paper on the "Hardware Lottery" by with historical egs from Babbage to Hinton".
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 252 new papers.
The paper discussed most in the news over the past week was "How Do the Hearts of Deep Fakes Beat? Deep Fake Source Detection via Interpreting Residuals with Biological Signals" by Umur Aybars Ciftci et al (Aug 2020), which was referenced 8 times, including in the article Facebook’s Project Aria Is Google Maps — For Your Entire Life in Medium.com. The paper got social media traction with 67 shares. On Twitter, @ilkedemir observed "Our latest paper about #DeepFakes is online! Detection is important, but can we track the generative source? How do the hearts of deep fakes beat? Can we project the residuals to the biological signal domain?".
Leading researcher Pieter Abbeel (UC Berkeley) came out with "Decoupling Representation Learning from Reinforcement Learning". @unsorsodicorda tweeted "Unsupervised Learning + RL *not* trained end-to-end (I.e., features trained first, policy learned after) show promising results for the first time. Interesting paper from team at Berkeley".
The paper shared the most on social media this week is by a team at Google: "Efficient Transformers: A Survey" by Yi Tay et al (Sep 2020) with 647 shares. @rajammanabrolu (Prithviraj Ammanabrolu) tweeted "This is v useful but also imagine there being enough work in a (sub)^n field to make a full survey paper and taxonomy in less than ~2 years 🤯🤯".
This week was active for "Computer Science - Computers and Society", with 30 new papers.
The paper discussed most in the news over the past week was "The Radicalization Risks of GPT-3 and Advanced Neural Language Models" by Kris McGuffie et al (Sep 2020), which was referenced 7 times, including in the article AI Weekly: Cutting-edge language models can produce convincing misinformation if we don’t stop them in Venturebeat. The paper got social media traction with 9 shares. The researchers expand on their previous research of the potential for abuse of generative language models by assessing GPT-3. A Twitter user, @AlexBNewhouse, posted "We now have an arxiv link for our paper on GPT-3! (This is my first arxiv submission, so I’m unreasonably excited even though it’s not that notable)".
The paper shared the most on social media this week is from Google: "The Hardware Lottery" by Sara Hooker (Sep 2020), with 776 shares.
This week was very active for "Computer Science - Human-Computer Interaction", with 38 new papers.
The paper discussed most in the news over the past week was by a team at Microsoft: "A Tale of Two Cities: Software Developers Working from Home During the COVID-19 Pandemic" by Denae Ford et al (Aug 2020), which was referenced once, in the Allstream article "What’s next for WFH: Tackling techie burnout". The paper got social media traction with 31 shares. A Twitter user, @krlis1337, posted "did it again! Another great paper about how we work and collaborate! WFH edition!", while @denaefordrobin observed "Our #WFH paper with responses from 3,634 software devs during the pandemic is now on arXiv! Huge shout out to all my co-authors: Sonia, Chandra, Jenna, Brian, and (cc// )".
This week was very active for "Computer Science - Learning", with 346 new papers.
The paper discussed most in the news over the past week was "TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices" by Alexander Wong et al (Aug 2020), which was referenced 20 times, including in the article New ABR Technology Lowers Power Consumption by 94% for Always-On Devices in Newswire.com. The paper author, Alexander Wong (University of Waterloo), was quoted saying "The key takeaways from this research is that not only can self-attention be leveraged to significantly improve the accuracy of deep neural networks, it can also have great ramifications for greatly improving efficiency and robustness of deep neural networks". The paper got social media traction with 12 shares. The researchers introduce the concept of attention condensers for building low - footprint, highly - efficient deep neural networks for on - device speech recognition on the edge. On Twitter, @sheldonfff observed "Important theoretical development from our team allowing AI to employ human-like shortcuts in the interest of efficiency. "I see only one move ahead, but it is always the correct one." – Jose Capablanca, World Chess Champion 1921-27 #darwinai".
Leading researcher Kyunghyun Cho (New York University) came out with "Evaluating representations by the complexity of learning low-loss predictors". @yapp1e tweeted "Evaluating representations by the complexity of learning low-loss predictors. We consider the problem of evaluating representations of data for use in solving a downstream task. We propose to measure".
The paper shared the most on social media this week is from Google: "The Hardware Lottery" by Sara Hooker (Sep 2020), with 776 shares.
Over the past week, 13 new papers were published in "Computer Science - Multiagent Systems".
Over the past week, 23 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was "TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices" by Alexander Wong et al (Aug 2020)
Over the past week, 42 new papers were published in "Computer Science - Robotics".
The paper discussed most in the news over the past week was "Super-Human Performance in Gran Turismo Sport Using Deep Reinforcement Learning" by Florian Fuchs et al (Aug 2020), which was referenced 6 times, including in the article A deep learning model achieves super-human performance at Gran Turismo Sport in Tech Xplore. The paper author, Yunlong Song (Researchers), was quoted saying "Autonomous driving at high speed is a challenging task that requires generating fast and precise actions even when the vehicle is approaching its physical limits". The paper got social media traction with 159 shares. The researchers consider the task of autonomous car racing in the top - selling car racing game Gran Turismo Sport. A Twitter user, @Underfox3, observed "Researchers have presented the first autonomous racing policy that achieves super-human performance in time trial settings in the Gran Turismo Sport. #DeepLearning".