Week Ending 4.17.2022
RESEARCH WATCH: 4.17.2022
This week was active for "Computer Science", with 1,198 new papers.
The paper discussed most in the news over the past week was by a team at the Institute of Acoustics: "Open Source MagicData-RAMC: A Rich Annotated Mandarin Conversational (RAMC) Speech Dataset" by Zehui Yang et al (Mar 2022), which was referenced 68 times, including in the article Open-Source MagicData-RAMC: 180-Hour Conversational Speech Dataset in Mandarin Released in Street Insider. The paper got social media traction with 14 shares. The authors introduce a high-quality, richly annotated Mandarin conversational (RAMC) speech dataset called MagicData-RAMC. A user, @Magic_Data_Tech, tweeted "Save time on data discovery and prep. Get open-source 180-hour #MagicData-RAMC conversational speech dataset in Mandarin on MagicHub for free! 🚀Download: 📖Research: 🏆Baseline: #machinelearning".
Leading researcher Kyunghyun Cho (New York University) came out with "Separating the World and Ego Models for Self-Driving".
The paper shared the most on social media this week is by a team at Google: "What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?" by Thomas Wang et al (Apr 2022) with 158 shares. The researchers present a large-scale evaluation of modeling choices and their impact on zero-shot generalization. @BigScienceLLM (BigScience Large Model Training) tweeted "🧐 When targeting zero-shot use, should you train a T5, a PrefixLM, or a GPT? What if you plan to leverage multitask finetuning (à la T0)? 🤩 In we explore how architectures & pretraining objectives impact zero-shot performance. ⬇️ Thread time!".
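The architectural choices the tweet alludes to differ mainly in their attention masks: a decoder-only causal LM (GPT-style) lets each token see only the past, while a PrefixLM allows bidirectional attention within the input prefix. A minimal sketch of the two masks (function names and sizes are illustrative, not from the paper):

```python
import numpy as np

def causal_mask(n):
    # Decoder-only (GPT-style): each token attends only to itself and earlier tokens.
    return np.tril(np.ones((n, n), dtype=bool))

def prefix_lm_mask(n, prefix_len):
    # PrefixLM: tokens within the prefix attend bidirectionally to the whole
    # prefix; tokens after the prefix still attend causally.
    mask = np.tril(np.ones((n, n), dtype=bool))
    mask[:prefix_len, :prefix_len] = True
    return mask

# With a 5-token sequence whose first 3 tokens form the input prefix,
# token 0 can "see" token 2 under PrefixLM but not under a causal LM.
print(causal_mask(5)[0, 2])        # False
print(prefix_lm_mask(5, 3)[0, 2])  # True
```

The same Transformer weights can be trained under either mask, which is why the paper can compare objectives and architectures somewhat independently.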
This week was very active for "Computer Science - Artificial Intelligence", with 212 new papers.
The paper discussed most in the news over the past week was by a team at DeepMind: "Can language models learn from explanations in context?" by Andrew K. Lampinen et al (Apr 2022), which was referenced 12 times, including in the article Deep Science: Combining vision and language could be the key to more capable AI in Yahoo! News. The paper got social media traction with 89 shares. A user, @PaperTldr, tweeted "🗜90% Language models can perform new tasks by adapting to a few in-context examples- humans can benefit from rapid learning from examples that connect examples to task principles".
Leading researcher Pieter Abbeel (UC Berkeley) came out with "Sim-to-Real 6D Object Pose Estimation via Iterative Self-training for Robotic Bin-picking". The authors propose an iterative self-training framework for sim-to-real 6D object pose estimation to facilitate cost-effective robotic grasping. @AjdDavison tweeted "This new work lines up with my view that explicit 3D scene understanding is the key powerful, general manipulation. If you have good object models, estimate their pose explicitly. Future networked robots will surely have access to huge, ever-growing object databases".
The paper shared the most on social media this week is "A Review on Language Models as Knowledge Bases" by Badr AlKhamissi et al (Apr 2022) with 105 shares. @morris_phd (AI News Clips by Morris Lee: News to help your R&D) tweeted "Large language models have lots of implicit knowledge. #AINewsClips #ML #ArtificialIntelligence #MachineLearning".
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 301 new papers.
The paper discussed most in the news over the past week was by a team at OpenAI: "Hierarchical Text-Conditional Image Generation with CLIP Latents" by Aditya Ramesh et al (Apr 2022), which was referenced 2 times, including in the article OpenAI creates Dall-E, an AI that can create art from basic descriptions in Silicon Republic. The paper got social media traction with 6 shares. A user, @summarizedml, tweeted "A two-stage model that uses CLIP image representations for image generation, and use them for language-guided image manipulations. 📄".
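The "two-stage model" mentioned in the tweet (the DALL·E 2 design) composes a prior, which maps a CLIP text embedding to a predicted CLIP image embedding, with a decoder that renders pixels conditioned on that embedding. A toy sketch of the composition, with random stand-in weights in place of the learned stages (all names and sizes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # toy embedding size; the real model uses CLIP's dimensionality

W_prior = rng.standard_normal((D, D)) * 0.3  # stand-in for the learned prior

def prior(text_emb):
    # Stage 1 ("prior"): predict a CLIP image embedding from the text embedding.
    z = W_prior @ text_emb
    return z / np.linalg.norm(z)

def decoder(image_emb):
    # Stage 2 ("decoder"): generate pixels conditioned on the image embedding.
    # A fixed projection stands in for the actual diffusion decoder.
    W_dec = np.ones((16, D)) / D
    return W_dec @ image_emb

text_emb = rng.standard_normal(D)
text_emb /= np.linalg.norm(text_emb)
image = decoder(prior(text_emb))
print(image.shape)  # (16,)
```

Because the intermediate representation is a CLIP image embedding, manipulating that embedding (e.g. interpolating it or moving it along a text direction) gives the language-guided image edits the tweet refers to.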
Leading researcher Pieter Abbeel (UC Berkeley) came out with "Sim-to-Real 6D Object Pose Estimation via Iterative Self-training for Robotic Bin-picking". The authors propose an iterative self-training framework for sim-to-real 6D object pose estimation to facilitate cost-effective robotic grasping.
The paper shared the most on social media this week is "GARF: Gaussian Activated Radiance Fields for High Fidelity Reconstruction and Pose Estimation" by Shin-Fang Chng et al (Apr 2022) with 127 shares. @hillbig (Daisuke Okanohara) tweeted "GARF uses Gaussian activation in NeRF and removes positional encoding. Unlike SIREN, GARF is robust against random initialization and significantly improves the reconstruction and pose estimation quality and robustness".
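The Gaussian activation the tweet describes replaces ReLU (or SIREN's sine) in the coordinate MLP, letting the network fit high-frequency detail without a positional encoding. A minimal sketch of such an activation, assuming a simple unnormalized Gaussian with an illustrative bandwidth parameter:

```python
import numpy as np

def gaussian_activation(x, sigma=0.1):
    # GARF-style activation: a smooth Gaussian bump instead of ReLU or sine.
    # Output lies in (0, 1], and gradients do not vanish for small negative
    # pre-activations the way ReLU's do.
    return np.exp(-(x ** 2) / (2.0 * sigma ** 2))

x = np.linspace(-1.0, 1.0, 5)
print(gaussian_activation(x))
```

Because the activation is smooth everywhere, training is reportedly far less sensitive to initialization than SIREN, which needs a carefully scaled initialization scheme.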
Over the past week, 19 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was by a team at Cornell: "Characterizing Alternative Monetization Strategies on YouTube" by Yiqing Hua et al (Mar 2022), which was referenced 23 times, including in the article Why So Many YouTube and TikTok Stars Want to Sell You a Shirt (And Maybe a Burger) in MSN United States. The paper author, Yiqing Hua (Cornell), was quoted saying "We were surprised to discover how much money these creators are making from alternative monetization platforms". The paper got social media traction with 35 shares. The authors focus on studying and characterizing these alternative monetization strategies. On Twitter, @bramabramson observed "Platforms may try to maximize the portion of user-generated content value chains they internalize. But that doesn't mean they outpace countervailing third-party siphoning efforts. (Which, no, I'm not going to muse about how that all intersects with #C11's proposed 4.2(3)(a).)".
This week was very active for "Computer Science - Human-Computer Interaction", with 42 new papers.
The paper discussed most in the news over the past week was "A Performance Evaluation of Nomon: A Flexible Interface for Noisy Single-Switch Users" by Nicholas Bonaker et al (Apr 2022), which was referenced 1 time, including in the article New System Speeds Up Typing for the Motor Impaired in Medgadget.com. The paper author, Tamara Broderick (Massachusetts Institute of Technology), was quoted saying "So far, the feedback from motor-impaired users has been invaluable to us; we’re very grateful to the motor-impaired user who commented on our initial interface and the separate motor-impaired user who participated in our study." The paper was shared 1 time on social media. On Twitter, @ta_broderick commented "Nomon uses Bayesian inference to make the most of the limited information we get from users. You can find more info and try it out yourself at ! Our new paper is slated to appear in #CHI2022, and a preprint is available at".
This week was very active for "Computer Science - Learning", with 373 new papers.
The paper discussed most in the news over the past week was by a team at DeepMind: "Can language models learn from explanations in context?" by Andrew K. Lampinen et al (Apr 2022).
Leading researcher Pieter Abbeel (UC Berkeley) came out with "Sim-to-Real 6D Object Pose Estimation via Iterative Self-training for Robotic Bin-picking". The authors propose an iterative self-training framework for sim-to-real 6D object pose estimation to facilitate cost-effective robotic grasping.
The paper shared the most on social media this week is by a team at Google: "What Language Model Architecture and Pretraining Objective Work Best for Zero-Shot Generalization?" by Thomas Wang et al (Apr 2022).
Over the past week, 11 new papers were published in "Computer Science - Multiagent Systems".
This week was active for "Computer Science - Neural and Evolutionary Computing", with 37 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Practical tradeoffs between memory, compute, and performance in learned optimizers" by Luke Metz et al (Mar 2022), which was referenced 1 time, including in the article Training Learned Optimizers in Medium.com. The paper got social media traction with 85 shares. A Twitter user, @Luke_Metz, commented "Memory, compute, & perf tradeoff in learned optimizers Learned optimizers replace hand designed rules like SGD/Adam with learned functions, ie neural net which takes transformed gradients as inputs + outputs weight updates. How do we design this NN? 1/5".
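The tweet gives the core idea of a learned optimizer: a small neural network that takes transformed gradients per parameter and outputs weight updates. A toy sketch under common assumptions (sign and log-magnitude as the gradient transforms, a tiny two-layer MLP with made-up sizes, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative learned-optimizer weights; in practice these are meta-trained.
W1 = rng.standard_normal((8, 2)) * 0.1
W2 = rng.standard_normal((1, 8)) * 0.1

def learned_update(grad):
    # Transform raw gradients into scale-robust per-parameter features.
    feats = np.stack([np.sign(grad), np.log1p(np.abs(grad))], axis=-1)  # (n, 2)
    hidden = np.tanh(feats @ W1.T)                                      # (n, 8)
    return (hidden @ W2.T).squeeze(-1)  # one update per parameter, shape (n,)

params = np.ones(4)
grads = np.array([0.5, -0.2, 0.0, 1.0])
params = params + learned_update(grads)  # replaces an SGD/Adam step
print(params.shape)  # (4,)
```

The memory/compute tradeoffs the paper studies come from exactly these design choices: which features to feed in, how large the update network is, and whether it carries per-parameter state (as Adam's moment estimates do).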
The paper shared the most on social media this week is by a team at Google: "Evolving Modular Soft Robots without Explicit Inter-Module Communication using Local Self-Attention" by Federico Pigozzi et al (Apr 2022) with 77 shares. The authors focus on Voxel-based Soft Robots (VSRs), aggregations of mechanically identical elastic blocks. @hardmaru (hardmaru) tweeted "Without any message passing, it’s a bit surprising that this worked at all! The solution is that a self-attention controller is needed for each module to effectively process local environmental inputs. This will be presented at #GECCO2022 as a full paper".
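The per-module controller the tweet describes applies self-attention only to a module's own local sensor readings, with no messages exchanged between modules. A minimal single-head sketch (using the inputs directly as queries, keys, and values for brevity; the evolved controller would use learned projections):

```python
import numpy as np

def local_self_attention(x):
    # x: (num_readings, feature_dim) local sensor readings for ONE module.
    # Each reading is re-weighted by its similarity to the others, so the
    # module can integrate its own inputs without inter-module communication.
    scores = x @ x.T / np.sqrt(x.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # softmax over readings
    return w @ x

readings = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
out = local_self_attention(readings)
print(out.shape)  # (3, 2)
```

Because every module runs the same controller on purely local inputs, coordination has to emerge through the shared body dynamics rather than explicit message passing, which is what makes the result surprising.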
This week was active for "Computer Science - Robotics", with 64 new papers.
The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "GelSight Fin Ray: Incorporating Tactile Sensing into a Soft Compliant Robotic Gripper" by Sandra Q. Liu et al (Apr 2022), which was referenced 7 times, including in the article A flexible way to grab items with feeling in MIT News. The paper author, Sandra Liu, was quoted saying "It’s versatile because it can passively adapt to different shapes and therefore grasp a variety of objects". Wenzhen Yuan (Carnegie Mellon University), who is not part of the study, said "Sensing with soft robots has been a big challenge, because it is difficult to set up sensors — which are traditionally rigid — on soft bodies". The paper was shared 1 time on social media.
Leading researcher Kyunghyun Cho (New York University) published "Separating the World and Ego Models for Self-Driving".
The paper shared the most on social media this week is by a team at Google: "Evolving Modular Soft Robots without Explicit Inter-Module Communication using Local Self-Attention" by Federico Pigozzi et al (Apr 2022).
The most influential Twitter user discussing papers is Francis Villatoro, who shared "What is the $i\varepsilon$ for the S-matrix?" by Holmfridur S. Hannesdottir et al (Apr 2022) and said: "What is the iε for the S-matrix? Can the S-matrix be complexified in a way consistent with causality? An iε-like prescription for deforming branch cuts in the space of Mandelstam invariants without modifying the analytic properties".