Week Ending 06.02.19
RESEARCH WATCH: 06.02.19
This week was active for "Computer Science", with 1,310 new papers.
The paper discussed most in the news over the past week was by a team at Samsung: "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" by Egor Zakharov et al (May 2019), which was referenced 174 times, including in the article Deepfakes Are Getting Better. But They're Still Easy to Spot in Wired News. The paper author, Egor Zakharov (Samsung), was quoted saying "Effectively, the learned model serves as a realistic avatar of a person". The paper also got the most social media traction with 60,936 shares. A user, @catovitch, tweeted "I wonder if this can/will be used with video compression. If it can be done in real time, the decoder could just be told "build a face our of this frame, then rotate it to these 3D positions for the next 5 frames"".
Leading researcher Yoshua Bengio (Université de Montréal) published "Attention Based Pruning for Shift Networks".
The paper shared the most on social media this week is by a team at Google: "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks" by Mingxing Tan et al (May 2019) with 1044 shares. The researchers study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. @yogthos (Dmitri Sotnikov ⚛) tweeted "EfficientNet-B7 achieves state-of-the-art 84.4% top-1 / 97.1% top-5 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. source code".
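The balancing the researchers describe is formalized in the paper as "compound scaling": depth, width, and resolution are all grown together by a single coefficient. A minimal sketch of that rule follows; the base depth/width/resolution values here are illustrative assumptions, not the paper's baseline network.

```python
# Compound scaling sketch (EfficientNet): a single coefficient phi scales
# depth, width, and input resolution jointly instead of tuning one axis.
# The per-axis bases alpha/beta/gamma are the constants reported in the paper.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution multipliers

def compound_scale(phi: int,
                   base_depth: int = 18,    # illustrative baseline layer count
                   base_width: int = 32,    # illustrative baseline channel count
                   base_res: int = 224):    # illustrative baseline input size
    """Return (depth, width, resolution) scaled by coefficient phi."""
    depth = round(base_depth * ALPHA ** phi)
    width = round(base_width * BETA ** phi)
    res = round(base_res * GAMMA ** phi)
    return depth, width, res
```

For example, `compound_scale(2)` grows all three axes at once rather than, say, only stacking more layers, which is the imbalance the paper argues against.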
This week was very active for "Computer Science - Artificial Intelligence", with 161 new papers.
The paper discussed most in the news over the past week was "ARCHANGEL: Tamper-proofing Video Archives using Temporal Content Hashes on the Blockchain" by Tu Bui et al (Apr 2019), which was referenced 18 times, including in the article Blockchain Project for National Archives Reports Successful Trial for Audio-Visual Content in Yahoo! News. The paper author, John Sheridan, was quoted saying "Exploring blockchain technology together with some of the world's leading archives, the ARCHANGEL project has shown, for real, how archives might combine forces to protect and assure vital digital evidence for the future. ARCHANGEL has been an outstanding partnership that has delivered ground breaking research into the practicalities of using blockchain to assure trust in large scale digital archives." The paper got social media traction with 20 shares. A Twitter user, @JCollomosse, said "ARCHANGEL fuses Blockchain and AI to help secure the integrity of National Archives around the world Check our talk at the CVPR Blockchain workshop (June 17). Project page".
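The general idea behind temporal content hashing can be illustrated with a chained per-segment digest: altering or reordering any video segment changes every subsequent hash, making tampering evident. This is a minimal sketch of that general principle only; the paper's actual method uses learned, content-aware hashes rather than plain SHA-256.

```python
import hashlib

def temporal_hashes(segments: list[bytes]) -> list[str]:
    """Chain per-segment SHA-256 digests over a sequence of video
    segments. Each digest mixes in the previous one, so editing or
    reordering any segment invalidates all later hashes."""
    prev = b""
    chain = []
    for seg in segments:
        digest = hashlib.sha256(prev + seg).hexdigest()
        chain.append(digest)
        prev = bytes.fromhex(digest)
    return chain
```

In a blockchain setting, only the short hash chain (not the video itself) would be committed to the ledger, allowing later verification of an archive's integrity.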
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics".
The paper shared the most on social media this week is by a team at Carnegie Mellon University: "SATNet: Bridging deep learning and logical reasoning using a differentiable satisfiability solver" by Po-Wei Wang et al (May 2019) with 168 shares. @antonioalegria (Antonio Alegria) tweeted "This is great! We’re also working with SAT solvers and DeepLearning and there’s a lot of opportunity".
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 259 new papers.
The paper discussed most in the news over the past week was by a team at Samsung: "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" by Egor Zakharov et al (May 2019), which was referenced 174 times, including in the article Deepfakes Are Getting Better. But They're Still Easy to Spot in Wired News. The paper author, Egor Zakharov (Samsung), was quoted saying "Effectively, the learned model serves as a realistic avatar of a person". The paper also got the most social media traction with 60,944 shares. On Twitter, @catovitch said "I wonder if this can/will be used with video compression. If it can be done in real time, the decoder could just be told "build a face our of this frame, then rotate it to these 3D positions for the next 5 frames"".
Leading researcher Aaron Courville (Université de Montréal) published "Batch weight for domain adaptation with mass shift". @gastronomy tweeted "> Unsupervised domain transfer is the task of transferring or translating samples from a source distribution to a different target distribution. Current solutions".
The paper shared the most on social media this week is by a team at Google: "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks" by Mingxing Tan et al (May 2019) with 1047 shares. The researchers study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. @yogthos (Dmitri Sotnikov ⚛) tweeted "EfficientNet-B7 achieves state-of-the-art 84.4% top-1 / 97.1% top-5 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. source code".
Over the past week, 22 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was by a team at University of Washington: "Defending Against Neural Fake News" by Rowan Zellers et al (May 2019), which was referenced 8 times, including in the article Yes! On to Our Story! GROVER AI Detects, Writes Fake News Better Than Humans in Sputnik. The paper author, Rowan Zellers (University of Washington), was quoted saying "Our work does suggest that there is an arms race between the adversary and the verifier". The paper got social media traction with 210 shares. A Twitter user, @r_macdonald, said "Dang, Univ of Washington ML/AI researchers (diff teams) are demonstrating dark sides of machine intelligence creating #disinformation media. First in 2017 with Lip-syncing Obama now, convincing text news stories".
This week was active for "Computer Science - Human-Computer Interaction", with 32 new papers.
This week was extremely active for "Computer Science - Learning", with 698 new papers.
The paper discussed most in the news over the past week was by a team at Samsung: "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" by Egor Zakharov et al (May 2019), which was referenced 174 times, including in the article Deepfakes Are Getting Better. But They're Still Easy to Spot in Wired News. The paper author, Egor Zakharov (Samsung), was quoted saying "Effectively, the learned model serves as a realistic avatar of a person". The paper also got the most social media traction with 60,942 shares. A user, @catovitch, tweeted "I wonder if this can/will be used with video compression. If it can be done in real time, the decoder could just be told "build a face our of this frame, then rotate it to these 3D positions for the next 5 frames"".
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics".
The paper shared the most on social media this week is by a team at Google: "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks" by Mingxing Tan et al (May 2019) with 1047 shares. The investigators study model scaling and identify that carefully balancing network depth, width, and resolution can lead to better performance. @yogthos (Dmitri Sotnikov ⚛) tweeted "EfficientNet-B7 achieves state-of-the-art 84.4% top-1 / 97.1% top-5 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. source code".
This week was active for "Computer Science - Multiagent Systems", with 20 new papers.
This week was active for "Computer Science - Neural and Evolutionary Computing", with 44 new papers.
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Attention Based Pruning for Shift Networks".
The paper shared the most on social media this week is by a team at Google: "Are Disentangled Representations Helpful for Abstract Visual Reasoning?" by Sjoerd van Steenkiste et al (May 2019) with 66 shares. The authors conduct a large-scale study that investigates whether disentangled representations are more suitable for abstract reasoning tasks.
This week was active for "Computer Science - Robotics", with 52 new papers.
The paper discussed most in the news over the past week was "Stanford Doggo: An Open-Source, Quasi-Direct-Drive Quadruped" by Nathan Kau et al (May 2019), which was referenced 35 times, including in the article Stanford Doggo: a highly agile quadruped robot in PhysOrg.com. The paper author, Nathan Kau, was quoted saying "We had seen these other quadruped robots used in research, but they weren’t something that you could bring into your own lab and use for your own projects". The paper got social media traction with 17 shares. The authors present Stanford Doggo, a quasi-direct-drive quadruped capable of dynamic locomotion.
Leading researcher Sergey Levine (University of California, Berkeley) came out with "Extending Deep Model Predictive Control with Safety Augmented Value Estimation from Demonstrations".