Week Ending 9.13.2020
RESEARCH WATCH: 9.13.2020
This week was very active for "Computer Science - Artificial Intelligence", with 207 new papers.
The paper discussed most in the news over the past week was by a team at DeepMind: "Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess" by Nenad Tomašev et al (Sep 2020), which was referenced 8 times, including in the article AI Ruined Chess. Now, It's Making the Game Beautiful Again in Wired News. The paper author, Vladimir Kramnik, was quoted saying "For quite a number of games on the highest level, half of the game—sometimes a full game—is played out of memory. You don't even play your own preparation; you play your computer's preparation." The paper got social media traction with 97 shares. The researchers use AlphaZero to creatively explore and design new chess variants. A Twitter user, @debarghya_das, commented "1/5 This chess paper from DeepMind and has absolutely consumed my mind in the last few days. They answered a question many chess players have dreamed of - how fair is chess? If you change the rules, does that change?".
Leading researcher Dhruv Batra (Georgia Institute of Technology) published "Integrating Egocentric Localization for More Realistic Point-Goal Navigation Agents".
The paper shared the most on social media this week is "Generative Language Modeling for Automated Theorem Proving" by Stanislas Polu et al (Sep 2020) with 777 shares. @NPCollapse (Connor Leahy) tweeted "Compute is eating AI. trains a state of the art theorem prover using a GPT architecture. I was waiting for this to happen, neat! Though for real...wtf is this proof? Maybe the gains were just that you need an AI to understand this language haha!".
The most influential Twitter user discussing papers is (((E. Glen Weyl))) who shared "Who Watches the Watchmen? A Review of Subjective Approaches for Sybil-resistance in Proof of Personhood Protocols" by Divya Siddarth et al (Jul 2020) and said: "Great new paper on identity protocols from some of my favorite people".
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 254 new papers.
The paper discussed most in the news over the past week was "A Lip Sync Expert Is All You Need for Speech to Lip Generation In The Wild" by K R Prajwal et al (Aug 2020), which was referenced 8 times, including in the article I learned to make a lip-syncing deepfake in just a few hours (and you can, too) in TodayHeadline. The paper got social media traction with 116 shares. The investigators address the problem of lip-syncing a talking face video of an arbitrary identity to match a target speech segment. On Twitter, @daily_sergey said "👄 Wav2Lip: Accurately Lip-syncing Videos In The Wild Lip-sync videos to any target speech with high accuracy. Try our interactive demo. Github: Paper: Interactive Demo: Colab".
Leading researcher Dhruv Batra (Georgia Institute of Technology) published "Integrating Egocentric Localization for More Realistic Point-Goal Navigation Agents".
The paper shared the most on social media this week is "Adversarial score matching and improved sampling for image generation" by Alexia Jolicoeur-Martineau et al (Sep 2020) with 89 shares. @AnimeshKarnewar (Animesh Karnewar) tweeted "And! Its finally here".
This week was very active for "Computer Science - Computers and Society", with 61 new papers.
The paper discussed most in the news over the past week was by a team at University of Waterloo: "Social Companion Robots to Reduce Isolation: A Perception Change Due to COVID-19" by Moojan Ghafurian et al (Aug 2020), which was referenced 3 times, including in the article More people interested in the idea of robot companions thanks to pandemic isolation in KitchenerToday.com. The paper author, Moojan Ghafurian (University of Waterloo), was quoted saying "This change in perception is likely because COVID-19 has caused people to pay more attention to the consequences of being socially isolated". The paper got social media traction with 5 shares. On Twitter, @chrishendel posted "Apparently our 🤖 overloads are a theme today. Robot chicken butchers, brought to you by #Covid19 via #Robots".
The paper shared the most on social media this week is "Measuring Massive Multitask Language Understanding" by Dan Hendrycks et al (Sep 2020) with 217 shares. @beduffy1 (Ben Duffy) tweeted "You just can't deny progress within NLP after reading the highlighted part. OR the benchmarks just aren't good enough? Bets on when superhuman performance on this new dataset? 😅🙃. This one seems much more difficult but I give it 3 years to be safe".
The most influential Twitter user discussing papers is (((E. Glen Weyl))) who shared "Who Watches the Watchmen? A Review of Subjective Approaches for Sybil-resistance in Proof of Personhood Protocols" by Divya Siddarth et al (Jul 2020) and said: "Great new paper on identity protocols from some of my favorite people".
This week was extremely active for "Computer Science - Human-Computer Interaction", with 61 new papers.
The paper discussed most in the news over the past week was by a team at Ritsumeikan University: "Excavating Excavating AI: The Elephant in the Gallery" by Michael J. Lyons (Sep 2020), which was referenced 3 times, including in the article The U.S. May Soon Scan New Immigrants’ Faces, Iris, Voices, and DNA in Medium.com. The paper also got the most social media traction with 227 shares.
This week was very active for "Computer Science - Learning", with 439 new papers.
The paper discussed most in the news over the past week was "TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices" by Alexander Wong et al (Aug 2020), which was referenced 20 times, including in the article TinySpeech: Novel Attention Condensers Enable Deep Recognition Networks on Edge Devices in SyncedReview.com. The paper author, Alexander Wong (University of Waterloo), was quoted saying "The key takeaways from this research is that not only can self-attention be leveraged to significantly improve the accuracy of deep neural networks, it can also have great ramifications for greatly improving efficiency and robustness of deep neural networks". The paper got social media traction with 12 shares. The authors introduce the concept of attention condensers for building low-footprint, highly efficient deep neural networks for on-device speech recognition on the edge. On Twitter, @sheldonfff observed "Important theoretical development from our team allowing AI to employ human-like shortcuts in the interest of efficiency. "I see only one move ahead, but it is always the correct one." – Jose Capablanca, World Chess Champion 1921-27 #darwinai".
Leading researcher Dhruv Batra (Georgia Institute of Technology) published "Integrating Egocentric Localization for More Realistic Point-Goal Navigation Agents".
The paper shared the most on social media this week is "Generative Language Modeling for Automated Theorem Proving" by Stanislas Polu et al (Sep 2020) with 777 shares.
The most influential Twitter user discussing papers is (((E. Glen Weyl))) who shared "Who Watches the Watchmen? A Review of Subjective Approaches for Sybil-resistance in Proof of Personhood Protocols" by Divya Siddarth et al (Jul 2020) and said: "Great new paper on identity protocols from some of my favorite people".
This week was active for "Computer Science - Multiagent Systems", with 26 new papers.
Over the past week, 22 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was "TinySpeech: Attention Condensers for Deep Speech Recognition Neural Networks on Edge Devices" by Alexander Wong et al (Aug 2020)
The paper shared the most on social media this week is by a team at Massachusetts Institute of Technology: "Understanding the Role of Individual Units in a Deep Neural Network" by David Bau et al (Sep 2020) with 85 shares. The researchers present network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks. @DynamicWebPaige (👩💻 DynamicWebPaige @ 127.0.0.1 🏠) tweeted ""Can the individual hidden units of a deep network teach us how the network solves a complex task? Single units often match *human-interpretable concepts that were not explicitly taught*: objects, parts, textures, tense, gender, tense, and more." 🔍".
This week was active for "Computer Science - Robotics", with 60 new papers.
The paper discussed most in the news over the past week was "OpenBot: Turning Smartphones into Robots" by Matthias Müller et al (Aug 2020), which was referenced 6 times, including in the article Intel researchers develop $50 3D printed "Openbot" to advance the accessibility of robotics in 3D Printing Industry. The paper author, Matthias Müller, was quoted saying "Robots are expensive. Legged robots and industrial manipulators cost as much as luxury cars, and the cheapest robots from Franka Emika or Clearpath cost at least $10K." The paper got social media traction with 8 shares. On Twitter, @EdgeImpulse observed "These researchers designed a $50 robotic platform that leverages your smartphone for sensing and computation".