Week Ending 06.02.19

 

RESEARCH WATCH: 06.02.19

 

This week was active for "Computer Science", with 1,310 new papers.

  • The paper discussed most in the news over the past week was by a team at Samsung: "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" by Egor Zakharov et al. (May 2019), which was referenced 174 times, including in the Wired News article "Deepfakes Are Getting Better. But They're Still Easy to Spot". The paper's author, Egor Zakharov (Samsung), was quoted saying "Effectively, the learned model serves as a realistic avatar of a person". The paper also got the most social media traction, with 60,936 shares. A user, @catovitch, tweeted "I wonder if this can/will be used with video compression. If it can be done in real time, the decoder could just be told 'build a face out of this frame, then rotate it to these 3D positions for the next 5 frames'".

  • Leading researcher Yoshua Bengio (Université de Montréal) published "Attention Based Pruning for Shift Networks".

  • The paper shared the most on social media this week was by a team at Google: "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks" by Mingxing Tan et al. (May 2019), with 1,044 shares. The researchers study model scaling and show that carefully balancing network depth, width, and resolution leads to better performance (a brief sketch of that compound-scaling rule follows below). @yogthos (Dmitri Sotnikov ⚛) tweeted "EfficientNet-B7 achieves state-of-the-art 84.4% top-1 / 97.1% top-5 accuracy on ImageNet, while being 8.4x smaller and 6.1x faster on inference than the best existing ConvNet. source code".
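
For readers who want the mechanics behind that claim, the sketch below (our illustration in Python, not the authors' code) shows the compound-scaling idea: a single coefficient phi grows depth, width, and input resolution together, using the alpha = 1.2, beta = 1.1, gamma = 1.15 constants reported in the paper, chosen so that alpha * beta^2 * gamma^2 is approximately 2 and each unit of phi roughly doubles FLOPS.

```python
# Minimal sketch of EfficientNet-style compound scaling.
# compound_scale is a hypothetical helper for illustration, not the paper's code.

def compound_scale(phi: float,
                   alpha: float = 1.2,    # depth grows by alpha per unit of phi
                   beta: float = 1.1,     # width grows by beta per unit of phi
                   gamma: float = 1.15):  # resolution grows by gamma per unit of phi
    """Return (depth, width, resolution) multipliers relative to the baseline network."""
    return alpha ** phi, beta ** phi, gamma ** phi


if __name__ == "__main__":
    # Larger phi scales the network in all three dimensions at once,
    # rather than stretching any single dimension on its own.
    for phi in range(8):
        d, w, r = compound_scale(phi)
        print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```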

This week was very active for "Computer Science - Artificial Intelligence", with 161 new papers.

This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 259 new papers.

  • The paper from this category discussed most in the news over the past week, and also the one with the most social media traction (60,944 shares), was again Samsung's "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" by Egor Zakharov et al. (May 2019), covered above.

  • Leading researcher Aaron Courville (Université de Montréal) published "Batch weight for domain adaptation with mass shift". @gastronomy tweeted "> Unsupervised domain transfer is the task of transferring or translating samples from a source distribution to a different target distribution. Current solutions…".

  • The paper from this category shared the most on social media this week was again Google's "EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks" by Mingxing Tan et al. (May 2019), with 1,047 shares (see above).

Over the past week, 22 new papers were published in "Computer Science - Computers and Society".

  • The paper discussed most in the news over the past week was by a team at the University of Washington: "Defending Against Neural Fake News" by Rowan Zellers et al. (May 2019), which was referenced 8 times, including in the Sputnik article "Yes! On to Our Story! GROVER AI Detects, Writes Fake News Better Than Humans". The paper's author, Rowan Zellers (University of Washington), was quoted saying "Our work does suggest that there is an arms race between the adversary and the verifier". The paper got social media traction with 210 shares. A Twitter user, @r_macdonald, said "Dang, Univ of Washington ML/AI researchers (diff teams) are demonstrating dark sides of machine intelligence creating #disinformation media. First in 2017 with Lip-syncing Obama, now convincing text news stories".

This week was active for "Computer Science - Human-Computer Interaction", with 32 new papers.

This week was extremely active for "Computer Science - Learning", with 698 new papers.

This week was active for "Computer Science - Multiagent Systems", with 20 new papers.

This week was active for "Computer Science - Neural and Evolutionary Computing", with 44 new papers.

This week was active for "Computer Science - Robotics", with 52 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.