Eye On AI

Week Ending 2.14.2021

RESEARCH WATCH: 2.14.2021

This week was active for "Computer Science", with 1,215 new papers.

  • The paper discussed most in the news over the past week was by a team at Colorado State University: "Who's a Good Boy? Reinforcing Canine Behavior in Real-Time using Machine Learning" by Jason Stock et al. (Jan 2021), which was referenced 122 times, including in the article PayPal’s Crypto Products Coming to the UK in Months in Yahoo! Finance. Paper co-author Tom Cavey (Colorado State University) was quoted saying "We have developed an apparatus which uses machine learning to monitor and reward dogs’ positive behaviors". The paper got social media traction with 71 shares. The researchers outline the development of an automatic dog treat dispenser that combines machine learning and embedded hardware to identify and reward dog behaviors in real time (a hypothetical sketch of such an inference loop appears after this list). A user, @itsstock, tweeted "Henry, our test subject in this experiment, was a very good boy and is still getting treats to this day for his good behaviors 🙂", while @pauldub67 commented "So we've had #cats +#AI. What about #MachineLearning to train your #dog when you're out ? "Sit", "stand", "lie down" = 92% test accuracy !!".

  • Leading researcher Yoshua Bengio (Université de Montréal) published "Structured Sparsity Inducing Adaptive Optimizers for Deep Learning".

  • The paper shared the most on social media this week is by a team at DeepMind: "High-Performance Large-Scale Image Recognition Without Normalization" by Andrew Brock et al. (Feb 2021) with 947 shares. The authors develop an adaptive gradient clipping technique that overcomes the training instabilities of networks trained without normalization, and design a significantly improved class of Normalizer-Free ResNets (a minimal sketch of the clipping rule appears after this list). @ZFPhalanx (phalanx) tweeted "adaptive gradient clipping stabilize the training in case of large learning rate or strong augmentation - NFNets: new normalize-free architectures - it match acc of efnetb7 while 8.7x faster to train - 89.2% top1 acc with large-scale pre-training".
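
For readers curious how a treat dispenser like the one above might operate, here is a minimal, hypothetical Python sketch of the real-time loop: a trained image classifier watches camera frames and fires the dispenser when it recognizes a desired behavior. The model file, class list, and dispense_treat hook are illustrative assumptions, not the authors' actual implementation.

    import cv2      # camera capture (OpenCV)
    import torch    # inference with a trained classifier

    CLASSES = ['sit', 'stand', 'lie_down']            # behaviors named in the paper
    model = torch.jit.load('dog_pose_classifier.pt')  # hypothetical trained model
    model.eval()

    def dispense_treat():
        # Hypothetical hardware hook; the paper drives a physical
        # dispenser from an embedded board.
        print('treat dispensed!')

    cap = cv2.VideoCapture(0)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Resize and normalize the frame into the tensor shape the model expects.
        x = cv2.resize(frame, (224, 224))
        x = torch.from_numpy(x).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            behavior = CLASSES[model(x).argmax(dim=1).item()]
        if behavior == 'sit':   # reward the target behavior in real time
            dispense_treat()
    cap.release()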
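
The adaptive gradient clipping (AGC) idea in the DeepMind paper is simple enough to sketch: before each optimizer step, rescale any gradient whose norm is too large relative to the norm of the parameter it updates. The PyTorch version below uses per-tensor norms for brevity, whereas the paper clips unit-wise (per output unit), so treat it as an approximation rather than the authors' exact method.

    import torch

    def adaptive_gradient_clip(parameters, clip=0.01, eps=1e-3):
        # Rescale each gradient so ||G|| never exceeds clip * ||W||.
        for p in parameters:
            if p.grad is None:
                continue
            p_norm = p.detach().norm().clamp(min=eps)  # guard near-zero params
            g_norm = p.grad.detach().norm()
            max_norm = clip * p_norm
            if g_norm > max_norm:
                p.grad.detach().mul_(max_norm / (g_norm + 1e-6))

    # Usage: call between loss.backward() and optimizer.step().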

This week was very active for "Computer Science - Artificial Intelligence", with 191 new papers.

  • The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning" by Hanrui Wang et al. (Dec 2020), which was referenced 10 times, including in the article A language learning system that pays attention – more efficiently than ever before in Mirage News. The paper's first author, Hanrui Wang (Massachusetts Institute of Technology), was quoted saying "Our vision for the future is that new algorithms and hardware that remove the redundancy in languages will reduce cost and save on the power budget for data center NLP workloads". The paper got social media traction with 19 shares. The authors present SpAtten, an efficient algorithm-architecture co-design that leverages token sparsity, head sparsity, and quantization opportunities to reduce the attention computation and memory access (a toy rendering of the token-pruning step appears after this list). On Twitter, @hanrui_w commented "NLP's Moore's Law: every year model size increases by 10x!😇 How to make them efficient? Check out our work on efficient NLP🙌 Hardware-Aware Transformer: NLP Attention accelerator: Prune/Quantize LMs".

  • The paper shared the most on social media this week is by a team at DeepMind: "Reverb: A Framework For Experience Replay" by Albin Cassirer et al. (Feb 2021) with 81 shares. The authors introduce Reverb: an efficient, extensible, and easy-to-use system designed specifically for experience replay in RL (a minimal usage example follows below). @DeepMind (DeepMind) tweeted "Reverb is an efficient, extensible and easy to use system designed specifically for experience replay in RL. In a new paper, our team presents the core design, examples of how it can be applied & empirical results of Reverb's performance characteristics".
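
SpAtten's cascade token pruning can be illustrated in a few lines of Python: tokens that accumulate little attention across heads and query positions are dropped, so every subsequent layer processes a shorter sequence. The helper below is a hypothetical software rendering of the idea only; the paper's contribution is a hardware accelerator, and the full system also prunes heads and applies progressive quantization.

    import torch

    def cascade_token_prune(hidden, attn_probs, keep_ratio=0.5):
        # hidden:     [batch, seq_len, d_model] token representations
        # attn_probs: [batch, heads, q_len, seq_len] softmax attention weights
        # A token's importance is the attention it receives, summed over
        # heads and query positions.
        importance = attn_probs.sum(dim=(1, 2))           # [batch, seq_len]
        k = max(1, int(importance.shape[-1] * keep_ratio))
        keep = importance.topk(k, dim=-1).indices.sort(dim=-1).values  # keep original order
        idx = keep.unsqueeze(-1).expand(-1, -1, hidden.shape[-1])
        return hidden.gather(1, idx)                      # [batch, k, d_model]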
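
Reverb itself is open source (github.com/deepmind/reverb), and a minimal end-to-end example, modeled on the project's introductory documentation at the time of the paper, looks roughly like this: a server hosts tables that pair a sampling strategy with an eviction strategy, and clients insert and sample items over RPC. Exact signatures may have changed in later releases.

    import reverb

    # A table pairs a sampler (how items are drawn) with a remover
    # (how items are evicted once max_size is reached).
    server = reverb.Server(tables=[
        reverb.Table(
            name='replay_buffer',
            sampler=reverb.selectors.Uniform(),
            remover=reverb.selectors.Fifo(),
            max_size=100,
            rate_limiter=reverb.rate_limiters.MinSize(1)),
    ])

    client = reverb.Client(f'localhost:{server.port}')
    client.insert([0, 1], priorities={'replay_buffer': 1.0})
    for sample in client.sample('replay_buffer', num_samples=1):
        print(sample)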

This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 218 new papers.

Over the past week, 28 new papers were published in "Computer Science - Computers and Society".

This week was active for "Computer Science - Human-Computer Interaction", with 29 new papers.

This week was extremely active for "Computer Science - Learning", with 518 new papers.

Over the past week, 13 new papers were published in "Computer Science - Multiagent Systems".

Over the past week, 32 new papers were published in "Computer Science - Neural and Evolutionary Computing".

This week was active for "Computer Science - Robotics", with 54 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.