Week Ending 2.14.2021
RESEARCH WATCH: 2.14.2021
This week was active for "Computer Science", with 1,215 new papers.
The paper discussed most in the news over the past week was by a team at Colorado State University: "Who's a Good Boy? Reinforcing Canine Behavior in Real-Time using Machine Learning" by Jason Stock et al (Jan 2021), which was referenced 122 times, including in the article PayPal’s Crypto Products Coming to the UK in Months in Yahoo! Finance. Paper co-author Tom Cavey (Colorado State University) was quoted saying "We have developed an apparatus which uses machine learning to monitor and reward dogs’ positive behaviors". The paper got social media traction with 71 shares. The researchers outline the development of an automatic dog treat dispenser that combines machine learning and embedded hardware to identify and reward dog behaviors in real time. A user, @itsstock, tweeted "Henry, our test subject in this experiment, was a very good boy and is still getting treats to this day for his good behaviors 🙂", while @pauldub67 commented "So we've had #cats +#AI. What about #MachineLearning to train your #dog when you're out ? "Sit", "stand", "lie down" = 92% test accuracy !!".
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Structured Sparsity Inducing Adaptive Optimizers for Deep Learning".
The paper shared the most on social media this week is by a team at DeepMind: "High-Performance Large-Scale Image Recognition Without Normalization" by Andrew Brock et al (Feb 2021) with 947 shares. The authors develop an adaptive gradient clipping technique that overcomes the training instabilities which arise when batch normalization is removed, and design a significantly improved class of Normalizer-Free ResNets. @ZFPhalanx (phalanx) tweeted "adaptive gradient clipping stabilize the training in case of large learning rate or strong augmentation - NFNets: new normalize-free architectures - it match acc of efnetb7 while 8.7x faster to train - 89.2% top1 acc with large-scale pre-training".
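The core idea of adaptive gradient clipping is to clip each parameter's gradient relative to the norm of the parameter itself, rather than using a fixed global threshold. A minimal NumPy sketch (the clipping threshold `clip` and the unit-wise norm choice follow the paper's description; the exact axis grouping here is an illustrative assumption):

```python
import numpy as np

def adaptive_grad_clip(param, grad, clip=0.01, eps=1e-3):
    # Unit-wise norms: one norm per output unit (row), as in AGC.
    p_norm = np.maximum(np.linalg.norm(param, axis=-1, keepdims=True), eps)
    g_norm = np.maximum(np.linalg.norm(grad, axis=-1, keepdims=True), 1e-6)
    # Rescale only where the gradient norm exceeds clip * parameter norm.
    trigger = g_norm > clip * p_norm
    scale = np.where(trigger, clip * p_norm / g_norm, 1.0)
    return grad * scale
```

Because the threshold scales with the parameter norm, small weights get tighter clipping than large ones, which is what stabilizes large-batch training without batch normalization.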
This week was very active for "Computer Science - Artificial Intelligence", with 191 new papers.
The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "SpAtten: Efficient Sparse Attention Architecture with Cascade Token and Head Pruning" by Hanrui Wang et al (Dec 2020), which was referenced 10 times, including in the article A language learning system that pays attention – more efficiently than ever before in Mirage News. The paper author, Hanrui Wang (Massachusetts Institute of Technology), was quoted saying "Our vision for the future is that new algorithms and hardware that remove the redundancy in languages will reduce cost and save on the power budget for data center NLP workloads". The paper got social media traction with 19 shares. The authors present SpAtten, an efficient algorithm-architecture co-design that leverages token sparsity, head sparsity, and quantization opportunities to reduce the attention computation and memory access. On Twitter, @hanrui_w commented "NLP's Moore's Law: every year model size increases by 10x!😇 How to make them efficient? Check out our work on efficient NLP🙌 Hardware-Aware Transformer: NLP Attention accelerator: Prune/Quantize LMs".
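Token pruning of the kind SpAtten performs in hardware can be illustrated in a few lines: score each token by the attention it receives, then keep only the top-k. This is a simplified software sketch, not the paper's cascade-pruning accelerator; the `keep_ratio` parameter and the summation over heads and queries are illustrative assumptions.

```python
import numpy as np

def prune_tokens(attn, tokens, keep_ratio=0.5):
    # attn: (heads, seq, seq) attention probabilities.
    # A token's importance = total attention it receives,
    # summed over all heads and all query positions.
    importance = attn.sum(axis=(0, 1))            # shape (seq,)
    k = max(1, int(len(tokens) * keep_ratio))
    keep = np.sort(np.argsort(importance)[-k:])   # top-k, original order
    return [tokens[i] for i in keep], keep
```

Tokens that attract little attention are dropped, so later layers process a shorter sequence and the attention cost shrinks quadratically.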
The paper shared the most on social media this week is by a team at DeepMind: "Reverb: A Framework For Experience Replay" by Albin Cassirer et al (Feb 2021) with 81 shares. The authors introduce Reverb: an efficient, extensible, and easy-to-use system designed specifically for experience replay in RL. @DeepMind (DeepMind) tweeted "Reverb is an efficient, extensible and easy to use system designed specifically for experience replay in RL. In a new paper, our team presents the core design, examples of how it can be applied & empirical results of Reverb's performance characteristics".
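Reverb itself is a client-server system with configurable samplers, removers, and rate limiters; the underlying concept it generalizes, a bounded buffer of transitions with uniform sampling, can be sketched in a few lines (this is a minimal illustration, not Reverb's API):

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal uniform-sampling experience replay buffer."""

    def __init__(self, capacity):
        # deque with maxlen evicts the oldest transition when full (FIFO).
        self.buffer = deque(maxlen=capacity)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling without replacement from stored transitions.
        return random.sample(self.buffer, batch_size)

    def __len__(self):
        return len(self.buffer)
```

Reverb's contribution is making this pattern efficient and flexible at scale: the buffer lives in a server process, and sampling strategy, eviction policy, and the ratio of inserts to samples are all configurable.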
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 218 new papers.
The paper discussed most in the news over the past week was by a team at Colorado State University: "Who's a Good Boy? Reinforcing Canine Behavior in Real-Time using Machine Learning" by Jason Stock et al (Jan 2021).
Leading researcher Luc Van Gool (Computer Vision Laboratory) came out with "Efficient Conditional GAN Transfer with Knowledge Propagation across Classes".
The paper shared the most on social media this week is by a team at DeepMind: "High-Performance Large-Scale Image Recognition Without Normalization" by Andrew Brock et al (Feb 2021).
Over the past week, 28 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was by a team at Northeastern University: "One Label, One Billion Faces: Usage and Consistency of Racial Categories in Computer Vision" by Zaid Khan et al (Feb 2021), which was referenced 2 times, including in the article Researchers find that labels in computer vision datasets poorly capture racial diversity in VentureBeat. The paper got social media traction with 10 shares.
This week was active for "Computer Science - Human-Computer Interaction", with 29 new papers.
The paper discussed most in the news over the past week was by a team at Microsoft: "AffectiveSpotlight: Facilitating the Communication of Affective Responses from Audience Members during Online Presentations" by Prasanth Murali et al (Jan 2021), which was referenced 3 times, including in the article Microsoft Teams AI could tell you who is most enjoying your video call in New Scientist. The paper was shared 4 times on social media. On Twitter, @murali_pras posted "Really excited to share the media coverage of my internship project at Microsoft Research. We explored the development of a tool to facilitate audience feedback in online presentations. The paper appears at CHI this year and pre-print can be found at".
This week was extremely active for "Computer Science - Learning", with 518 new papers.
The paper discussed most in the news over the past week was "Detection and Prediction of Nutrient Deficiency Stress using Longitudinal Aerial Imagery" by Saba Dadsetan et al (Dec 2020), which was referenced 14 times, including in the article Intelinair Reports Record 2020 Revenue Growth Proving Value of Enhanced AGMRI Features in MarketScreener.com. The paper got social media traction with 19 shares.
Leading researcher Yoshua Bengio (Université de Montréal) published "Structured Sparsity Inducing Adaptive Optimizers for Deep Learning".
The paper shared the most on social media this week is by a team at DeepMind: "High-Performance Large-Scale Image Recognition Without Normalization" by Andrew Brock et al (Feb 2021).
Over the past week, 13 new papers were published in "Computer Science - Multiagent Systems".
The paper shared the most on social media this week is "Automated and Distributed Statistical Analysis of Economic Agent-Based Models" by Andrea Vandin et al (Feb 2021) with 267 shares.
Over the past week, 32 new papers were published in "Computer Science - Neural and Evolutionary Computing".
This week was active for "Computer Science - Robotics", with 54 new papers.
The paper discussed most in the news over the past week was "The Six Hug Commandments: Design and Evaluation of a Human-Sized Hugging Robot with Visual and Haptic Perception" by Alexis E. Block et al (Jan 2021), which was referenced 2 times, including in the article HuggieBot 2.0: A soft and human-size robot that hugs users on request in Tech Xplore. The paper author, Alexis E. Block, was quoted saying "In addition to showcasing hardware and software improvements, our new paper about HuggieBot 3.0 centers on enabling the robot to detect, classify and respond to intra-hug gestures like rubs, pats and squeezes". The paper was shared once on social media.