Week Ending 4.3.2022
RESEARCH WATCH: 4.3.2022
This week was active for "Computer Science", with 1,495 new papers.
The paper discussed most in the news over the past week was "You Cannot Always Win the Race: Analyzing the LFENCE/JMP Mitigation for Branch Target Injection" by Alyssa Milburn (Intel) et al (Mar 2022), which was referenced 19 times, including in the article Chips & Salsa Episode 13: Intel STORM team in Intel. The paper got social media traction with 8 shares.
Leading researcher Oriol Vinyals (DeepMind) published "Training Compute-Optimal Large Language Models", which had 29 shares over the past 4 days and was also the most shared paper on social media overall, with 677 tweets. @arankomatsuzaki (Aran Komatsuzaki) tweeted "Training Compute-Optimal Large Language Models Trains Chinchilla, which is Gopher w/ the same compute budget but with 70B parameters and 4x more data. It significantly outperforms Gopher, e.g. by >7% on MMLU".
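The trade-off in that tweet is easy to sanity-check with the rule of thumb that training compute is roughly 6 × parameters × tokens. The sketch below is only a back-of-the-envelope check, using Gopher's published 280B-parameter size, an approximate 300B-token training set, and the tweet's "4x more data" figure for Chinchilla.

```python
# Back-of-the-envelope check of the Gopher/Chinchilla trade-off described above,
# using the common approximation: training FLOPs ~= 6 * parameters * tokens.
# The 280B/70B parameter counts are public; the 300B-token figure for Gopher is
# approximate, and "4x more data" is taken from the tweet.

def train_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute via the 6*N*D rule of thumb."""
    return 6.0 * n_params * n_tokens

gopher_params, gopher_tokens = 280e9, 300e9
chinchilla_params = 70e9                    # 4x fewer parameters
chinchilla_tokens = 4 * gopher_tokens       # 4x more data, per the tweet

print(f"Gopher:     {train_flops(gopher_params, gopher_tokens):.2e} FLOPs")
print(f"Chinchilla: {train_flops(chinchilla_params, chinchilla_tokens):.2e} FLOPs")
# Both land around 5e23 FLOPs: the same budget, spent on data rather than model size.
```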
This week was very active for "Computer Science - Artificial Intelligence", with 215 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Memorizing Transformers" by Yuhuai Wu et al (Mar 2022), which was referenced 3 times, including in the article Can a language model acquire new knowledge by simply reading new data? in Analytics India Magazine. The paper got social media traction with 61 shares. The authors extend language models with the ability to memorize the internal representations of past inputs. A user, @PaperTldr, tweeted "🗜72% In this work, we extend language models with internal representations of past inputs to improve their ability to lookup into past non-differentiable memory of recent benchmarks and tasks, including generic web, math papers, books, well as a".
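For readers unfamiliar with the mechanism, the core idea is a k-nearest-neighbour lookup into a non-differentiable cache of (key, value) representations from earlier text. The sketch below is a minimal NumPy illustration of that retrieval step under assumed dimensions; it is not the authors' implementation, which additionally gates the retrieved values against local attention.

```python
import numpy as np

# Minimal sketch of kNN retrieval from a non-differentiable memory of past
# (key, value) representations, in the spirit of Memorizing Transformers.
# Dimensions and the dot-product similarity are arbitrary assumptions.

rng = np.random.default_rng(0)
d_model, mem_size = 64, 1024

# Memory filled with keys/values cached from previously processed segments.
mem_keys = rng.standard_normal((mem_size, d_model))
mem_vals = rng.standard_normal((mem_size, d_model))

def knn_memory_lookup(query: np.ndarray, k: int = 4) -> np.ndarray:
    """Return an attention-weighted mix of the k nearest memory values."""
    scores = mem_keys @ query                      # dot-product similarity
    top = np.argpartition(scores, -k)[-k:]         # indices of the k best matches
    weights = np.exp(scores[top] - scores[top].max())
    weights /= weights.sum()                       # softmax over retrieved entries
    return weights @ mem_vals[top]                 # weighted sum of retrieved values

query = rng.standard_normal(d_model)               # a current-token query vector
retrieved = knn_memory_lookup(query)
print(retrieved.shape)                             # (64,)
```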
Leading researcher Pieter Abbeel (UC Berkeley) published "Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design". The authors present a supervised pretraining approach to learn circuit representations that can be adapted to new circuit topologies or unseen prediction tasks. @CyrusHakha tweeted "Can we pre-train Graph Neural Networks for few-shot circuit modeling and design? We show that by pre-training deep GNNs to predict bias of circuits we can learn models that can be reused to do sample efficient circuit optimization and modeling. 📖 1/N".
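The adaptation story follows the familiar pretrain-then-finetune pattern: representations learned on one supervised objective (predicting circuit bias) are reused, with a small new head, for a different task from few examples. The toy sketch below illustrates that pattern on synthetic data with a frozen stand-in encoder; it is not the paper's GNN.

```python
import numpy as np

# Toy sketch of "pretrain, then adapt few-shot": a frozen encoder (standing in
# for a GNN pretrained to predict circuit bias) produces embeddings, and only a
# small linear head is fit for a new, unseen prediction task from a handful of
# examples. All data here is synthetic; the encoder is a stand-in, not the paper's model.

rng = np.random.default_rng(0)
n_few_shot, n_feat, d_emb = 16, 32, 64          # few labeled examples for the new task

# "Pretrained" encoder weights: in practice these come from supervised pretraining.
W_enc = rng.standard_normal((n_feat, d_emb)) / np.sqrt(n_feat)
encode = lambda X: np.tanh(X @ W_enc)           # frozen representation function

X_new = rng.standard_normal((n_few_shot, n_feat))   # new-task circuit features
y_new = rng.standard_normal(n_few_shot)             # new-task labels (e.g., a gain spec)

# Few-shot adaptation: fit only a linear head on top of frozen embeddings.
Z = encode(X_new)
head, *_ = np.linalg.lstsq(Z, y_new, rcond=None)
print("train MSE:", np.mean((Z @ head - y_new) ** 2))
```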
The paper shared the most on social media this week is by a team at Tel Aviv University: "Transformer Language Models without Positional Encodings Still Learn Positional Information" by Adi Haviv et al (Mar 2022) with 185 shares. @Seb_Bratieres (Sébastien Bratières) tweeted "Received wisdom in DL: Transformers need positional encoding to make up for their intrinsically permutation-invariant architecture🤷. Or maybe they don't?🤔".
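One intuition for why decoder-style models can do without positional encodings is that causal masking already breaks permutation symmetry: each position attends over a different-length prefix. The toy NumPy check below illustrates that intuition only (it is not the paper's probing setup): a bidirectional attention layer without positional encodings is permutation-equivariant, while a causally masked one is not.

```python
import numpy as np

# Without positional encodings, bidirectional self-attention is permutation-
# equivariant (shuffling inputs just shuffles outputs), but causal attention is
# not, so position information can in principle be recovered. Toy check only.

rng = np.random.default_rng(0)
seq_len, d = 4, 8
x = rng.standard_normal((seq_len, d))        # token embeddings, no positional encoding

def attention(x, causal):
    scores = x @ x.T / np.sqrt(d)
    if causal:
        scores = np.where(np.tril(np.ones((seq_len, seq_len))) == 1, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x

perm = np.array([2, 0, 3, 1])                # shuffle the tokens

# Bidirectional: permuting inputs merely permutes outputs (no positional signal).
print(np.allclose(attention(x, causal=False)[perm], attention(x[perm], causal=False)))  # True

# Causal: outputs change in a position-dependent way, so position leaks in.
print(np.allclose(attention(x, causal=True)[perm], attention(x[perm], causal=True)))    # False
```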
The most influential Twitter user discussing papers is Mark Riedl who shared "STaR: Bootstrapping Reasoning With Reasoning" by Eric Zelikman et al (Mar 2022) and said: "Is it possible to be both unsurprised that this works and also amazed that this works? Use prompting to generate “chains of thought”, then train on those chains to improve language model performance".
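The tweet compresses the STaR recipe into one sentence: sample chains of thought, keep those that reach the correct answer, and fine-tune on them. A hedged sketch of that loop follows; `generate_rationale` and `finetune` are hypothetical placeholders rather than an actual API, and the paper additionally "rationalizes" failed questions by hinting the answer.

```python
# Hedged sketch of a STaR-style bootstrapping iteration as described in the tweet.
# `model`, `generate_rationale`, and `finetune` are hypothetical placeholders.

def star_iteration(model, dataset, generate_rationale, finetune):
    """One bootstrap round: collect self-generated rationales that yield the
    correct answer, then fine-tune the model on them."""
    training_examples = []
    for question, answer in dataset:
        rationale, predicted = generate_rationale(model, question)
        if predicted == answer:
            # Keep only rationales that lead to the right answer; the paper also
            # adds a rationalization step where the answer is given as a hint.
            training_examples.append((question, rationale, answer))
    return finetune(model, training_examples)
```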
This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 497 new papers.
The paper discussed most in the news over the past week was by a team at The University of Tokyo: "Robot peels banana with goal-conditioned dual-action deep imitation learning" by Heecheol Kim et al (Mar 2022), which was referenced 12 times, including in the article Watch Robot Peel A Banana Without Squishing It, Showing Off Improved Dexterity in Indiatimes. The paper got social media traction with 11 shares. The authors present a goal-conditioned dual-action deep imitation learning (DIL) method which can learn dexterous manipulation skills using human demonstration data. On Twitter, @summarizedml observed "This paper presents a goal-conditioned dual-action deep imitation learning method for dexterous robot manipulation tasks. 📄", while @IFLScience said "You can read more about it in the pre-print paper posted on arXiv".
Leading researcher Aaron Courville (Université de Montréal) published "Simplicial Embeddings in Self-Supervised Learning and Downstream Classification".
The paper shared the most on social media this week is "Exploring Plain Vision Transformer Backbones for Object Detection" by Yanghao Li et al (Mar 2022) with 136 shares. @DocXavi (Xavier Giró🎗) tweeted "Today I taught a lecture about the Vision Transformer for the first time. I guess my slides are already outdated, new version coming next Tuesday. #round2 #catchingup".
The most influential Twitter user discussing papers is Frank Wilczek who shared "Axion Dark Matter" by J. Jaeckel et al (Mar 2022) and said: "Big Snowmass study of axion search experiments: Getting very popular!".
Over the past week, 18 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was "Beach to Bitch: Inadvertent Unsafe Transcription of Kids Content on YouTube" by Krithika Ramesh et al (Feb 2022), which was referenced 1 time, including in the article How transcription morphs words into adult language in Deccan Herald. The paper got social media traction with 5 shares. The investigators present a novel (and troubling) finding that well-known automatic speech recognition (ASR) systems may produce text content highly inappropriate for kids while transcribing YouTube Kids videos.
The paper shared the most on social media this week is "An Illustrative Industry Architecture to Mitigate Potential Fragmentation across Central Bank Digital Currency and Commercial Bank Money" by Lee Braine et al (Mar 2022) with 82 shares. @StevieRipple (StevieRipple ⓧ) tweeted "Love how this report from Barclays Bank in the UK contains so many of the exact words that have been using for the last few years".
The most influential Twitter user discussing papers is Frank Wilczek who shared "Exploration of Wire Array Metamaterials for the Plasma Axion Haloscope" by M. Wooten et al (Mar 2022) and said: "Plasma haloscope prototype testing, with beautiful results. On to ALPHA!".
This week was very active for "Computer Science - Human-Computer Interaction", with 40 new papers.
The paper discussed most in the news over the past week was "Behaviorally Grounded Model-Based and Model Free Cost Reduction in a Simulated Multi-Echelon Supply Chain" by James Paine (Feb 2022), which was referenced 1 time, including in the article How Artificial Intelligence can be used in Supply Chain Management in Medium.com. The paper got social media traction with 6 shares.
This week was extremely active for "Computer Science - Learning", with 465 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Pathways: Asynchronous Distributed Dataflow for ML" by Paul Barham et al (Mar 2022), which was referenced 3 times, including in the article How Google hopes to build more efficient, multi-capability AI systems in TheRegister.com. The paper got social media traction with 185 shares. A Twitter user, @kaushik_bokka, posted "Uff. Weekend Read! Seems to have a lot of gems and interesting takeaways in there", while @_arohan_ observed "Really cool work on scaling beyond what one might think is possible, with specific focus on MPMD (IIUC). Lot of model architecture ideas waiting to be unlocked".
Leading researcher Oriol Vinyals (DeepMind) came out with "Training Compute-Optimal Large Language Models", which had 29 shares over the past 4 days and was also the most shared paper on social media, with 677 tweets. @arankomatsuzaki (Aran Komatsuzaki) tweeted "Training Compute-Optimal Large Language Models Trains Chinchilla, which is Gopher w/ the same compute budget but with 70B parameters and 4x more data. It significantly outperforms Gopher, e.g. by >7% on MMLU".
Over the past week, 14 new papers were published in "Computer Science - Multiagent Systems".
Over the past week, 20 new papers were published in "Computer Science - Neural and Evolutionary Computing".
This week was very active for "Computer Science - Robotics", with 93 new papers.
The paper discussed most in the news over the past week was by a team at The University of Tokyo: "Robot peels banana with goal-conditioned dual-action deep imitation learning" by Heecheol Kim et al (Mar 2022).
Leading researcher Pieter Abbeel (UC Berkeley) came out with "Adversarial Motion Priors Make Good Substitutes for Complex Reward Functions".