Week Ending 3.14.2021
RESEARCH WATCH: 3.14.2021
This week was active for "Computer Science", with 1,212 new papers.
The paper discussed most in the news over the past week was by a team at University of Oxford: "QNLP in Practice: Running Compositional Models of Meaning on a Quantum Computer" by Robin Lorenz et al (Feb 2021), which was referenced 61 times, including in the article Real-time Analytics News for Week Ending March 6 in RT Insights. The paper got social media traction with 38 shares. The authors present results on the first NLP experiments conducted on Noisy Intermediate - Scale Quantum (NISQ) computers for datasets of size >= 100 sentences. A Twitter user, @Moor_Quantum, posted "Very nice work by team that advances QNLP and provides significant POC that it can be done on quantum. Covers tech problems and training issues of running a NPL model with >100 sentences on NISQ QC".
Leading researcher Ruslan Salakhutdinov (Carnegie Mellon University) published "Instabilities of Offline RL with Pre-Trained Neural Representation".
The paper shared the most on social media this week is "CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review" by Dan Hendrycks et al (Mar 2021) with 344 shares. @hardmaru (hardmaru) tweeted "CUAD: A dataset with over 13,000 annotations for hundreds of legal contracts that have been manually labelled by legal experts, to serve as a benchmark for contract understanding. Discussion: GitHub: Paper".
The most influential Twitter user discussing papers is Judea Pearl who shared "Directions for Explainable Knowledge-Enabled Systems" by Shruthi Chari et al (Mar 2020) and said: "Glad to finally see a paper on "explainability" that explains why explainability cannot be achieved through data-centric thinking, but requires world knowledge about what is to be explained".
This week was very active for "Computer Science - Artificial Intelligence", with 169 new papers.
The paper discussed most in the news over the past week was by a team at University of Oxford: "QNLP in Practice: Running Compositional Models of Meaning on a Quantum Computer" by Robin Lorenz et al (Feb 2021)
Leading researcher Ruslan Salakhutdinov (Carnegie Mellon University) published "Instabilities of Offline RL with Pre-Trained Neural Representation".
The paper shared the most on social media this week is by a team at UC Berkeley: "Pretrained Transformers as Universal Computation Engines" by Kevin Lu et al (Mar 2021) with 243 shares. @BMarcusMcCann (Bryan McCann) tweeted "Recalling: Bytes in, bytes out. A single multi-modal, multi-task, multi-everything model. One that throws off the shackles of our distinctions between domains, modalities, tasks, and all the rest. Just train a general pattern recognition system. Getting closer now".
The most influential Twitter user discussing papers is Thread Reader App who shared "Cool baryon and quark matter in holographic QCD" by Takaaki Ishii et al (Mar 2019) and said: "Bonjour, the unroll you asked for: Yesterday in more on "improved holographic... See you soon. 🤖". Note that this paper was published about two years ago.
This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 306 new papers.
The paper discussed most in the news over the past week was by a team at DeepMind: "High-Performance Large-Scale Image Recognition Without Normalization" by Andrew Brock et al (Feb 2021), which was referenced 10 times, including in the article NF-Nets : Normalizer Free Nets in Medium.com. The paper got social media traction with 1,086 shares. The researchers develop an adaptive gradient clipping technique that overcomes the training instabilities of networks without normalization layers, and design a significantly improved class of Normalizer-Free ResNets. A user, @sohamde_, tweeted "Releasing NFNets: SOTA on ImageNet. Without normalization layers! Code: This is the third paper in a series that began by studying the benefits of BatchNorm and ended by designing highly performant networks w/o it. A thread: 1/8".
Leading researcher Dhruv Batra (Georgia Institute of Technology) published "Large Batch Simulation for Deep Reinforcement Learning".
The paper shared the most on social media this week is "Deep Generative Modelling: A Comparative Review of VAEs, GANs, Normalizing Flows, Energy-Based and Autoregressive Models" by Sam Bond-Taylor et al (Mar 2021) with 128 shares.
The most influential Twitter user discussing papers is Judea Pearl who shared "Directions for Explainable Knowledge-Enabled Systems" by Shruthi Chari et al (Mar 2020)
This week was active for "Computer Science - Computers and Society", with 32 new papers.
The paper discussed most in the news over the past week was by a team at Carnegie Mellon University: "Gender Bias, Social Bias and Representation: 70 Years of B(H)ollywood" by Kunal Khadilkar et al (Feb 2021), which was referenced 11 times, including in the article Bollywood ‘biases’ presented by Carnegie Mellon’s AI tool: Upper caste Hindu doctors & no NE in ThePrint. The paper author, Ashiqur R. KhudaBukhsh (Carnegie Mellon University), was quoted saying "All of these things we kind of knew, but now we have numbers to quantify them". The paper got social media traction with 12 shares. A Twitter user, @KunalKhadilkar, said "Hi Alison, I am so glad you found our research interesting. I would love to talk more about how we used linguistic techniques for uncovering social biases. The full paper is available at".
This week was very active for "Computer Science - Human-Computer Interaction", with 39 new papers.
The paper discussed most in the news over the past week was "Reducing cybersickness in 360-degree virtual reality" by Iqra Arshad et al (Mar 2021), which was referenced 1 time, including in the article Solving Cybersickness with AI in Towards Data Science. The paper was shared 3 times in social media. The authors investigated cybersickness in 360-degree VR.
This week was very active for "Computer Science - Learning", with 380 new papers.
The paper discussed most in the news over the past week was by a team at University of Oxford: "QNLP in Practice: Running Compositional Models of Meaning on a Quantum Computer" by Robin Lorenz et al (Feb 2021)
Leading researcher Pieter Abbeel (University of California, Berkeley) came out with "Pretrained Transformers as Universal Computation Engines", which had 22 shares over the past 4 days. @BMarcusMcCann tweeted "Recalling: Bytes in, bytes out. A single multi-modal, multi-task, multi-everything model. One that throws off the shackles of our distinctions between domains, modalities, tasks, and all the rest. Just train a general pattern recognition system. Getting closer now".
The paper shared the most on social media this week is "CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review" by Dan Hendrycks et al (Mar 2021)
The most influential Twitter user discussing papers is Thread Reader App who shared "Cool baryon and quark matter in holographic QCD" by Takaaki Ishii et al (Mar 2019)
Over the past week, 16 new papers were published in "Computer Science - Multiagent Systems".
Over the past week, 25 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was "Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks" by Torsten Hoefler et al (Jan 2021), which was referenced 1 time, including in the article Accelerating Neural Networks on Mobile and Web with Sparse Inference in Google AI Blog. The paper got social media traction with 223 shares. The investigators survey prior work on sparsity in deep learning and provide an extensive tutorial of sparsification for both inference and training. A user, @thoefler, tweeted "The future of #DeepLearning is sparse! See our overview of the field and upcoming opportunities for how to gain 10-100x performance to fuel the next revolution. #HPC techniques will be key as large-scale training is #supercomputing. #MachineLearning".
This week was extremely active for "Computer Science - Robotics", with 132 new papers.
The paper discussed most in the news over the past week was by a team at The National Centre of Competence in Research Robotics (NCCR Robotics): "A Unified MPC Framework for Whole-Body Dynamic Locomotion and Manipulation" by Jean-Pierre Sleiman et al (Mar 2021), which was referenced 1 time, including in the article Video Friday: A Walking, Wheeling Quadruped in Spectrum Online. The paper was shared 1 time on social media. The investigators propose a whole-body planning framework that unifies dynamic locomotion and manipulation tasks by formulating a single multi-contact optimal control problem.
Leading researcher Sergey Levine (University of California, Berkeley) came out with "Maximum Entropy RL (Provably) Solves Some Robust RL Problems". The investigators prove theoretically that standard maximum entropy RL is robust to some disturbances in the dynamics and the reward function.
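For context, the robustness result concerns the standard maximum-entropy RL objective, which augments expected return with a policy-entropy bonus (this is the textbook formulation, not notation taken from the paper itself):

```latex
J(\pi) = \mathbb{E}_{\tau \sim \pi}\Big[ \sum_{t} r(s_t, a_t) + \alpha \,\mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big]
```

Here \(\mathcal{H}\) is the entropy of the policy at each visited state and \(\alpha > 0\) trades off reward against randomness; the paper's claim is that optimizing this objective confers robustness to certain perturbations of \(r\) and of the dynamics.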
The paper shared the most on social media this week is by a team at Yale University: "Scale invariant robot behavior with fractals" by Sam Kriegman et al (Mar 2021) with 58 shares. @ampanmdagaba (Arseny Khakhalin) tweeted "That's the nerdiest thing I've seen in a while! And so pretty! (Fractal robots, huh!? Fractal - made of small parts that act locally - robots!!!)".
The most influential Twitter user discussing papers is Judea Pearl who shared "Directions for Explainable Knowledge-Enabled Systems" by Shruthi Chari et al (Mar 2020)