Week Ending 3.27.2022
RESEARCH WATCH: 3.27.2022
This week was active for "Computer Science", with 1,381 new papers.
The paper discussed most in the news over the past week was by a team at Lawrence Berkeley National Laboratory: "FourCastNet: A Global Data-driven High-resolution Weather Model using Adaptive Fourier Neural Operators" by Jaideep Pathak et al (Feb 2022), which was referenced 26 times, including in the article Nvidia’s digital twin platform will change how scientists and engineers think in BusinessMayor.com. The paper author, Karthik Kashinath (Lawrence Berkeley National Laboratory), was quoted saying "Digital twins allow researchers and decision-makers to interact with data and rapidly explore what-if scenarios, which are nearly impossible with traditional modeling techniques because they're expensive and time consuming". The paper got social media traction with 71 shares. On Twitter, @hillbig posted "FourCastNet is a ViT-based global weather forecasting model that provides a week-long forecast at 0.25° resolution (30km x 30km) in less than 2 seconds (45000x faster than current simulators) and enables the use of thousands of ensemble members".
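For readers curious about the mechanism behind the tweet, the sketch below illustrates the kind of Fourier-domain token mixing that Adaptive Fourier Neural Operator layers perform on gridded atmospheric fields. The block structure, tensor shapes, and MLP-in-frequency-space design here are illustrative assumptions, not the authors' exact architecture.

    # Minimal sketch of an AFNO-style token-mixing block (illustrative, not FourCastNet's exact layer).
    import torch
    import torch.nn as nn
    import torch.fft

    class AFNOBlockSketch(nn.Module):
        def __init__(self, channels: int, hidden: int = 256):
            super().__init__()
            # Channel-mixing MLP applied independently to each Fourier mode
            # (real and imaginary parts handled as separate channels).
            self.mlp = nn.Sequential(
                nn.Linear(2 * channels, hidden),
                nn.GELU(),
                nn.Linear(hidden, 2 * channels),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, channels, lat, lon) gridded weather variables
            b, c, h, w = x.shape
            x_freq = torch.fft.rfft2(x, norm="ortho")           # spatial FFT
            z = torch.view_as_real(x_freq)                       # (b, c, h, w//2+1, 2)
            z = z.permute(0, 2, 3, 1, 4).reshape(b, h, -1, 2 * c)
            z = self.mlp(z)                                      # mix channels per Fourier mode
            z = z.reshape(b, h, -1, c, 2).permute(0, 3, 1, 2, 4).contiguous()
            x_freq = torch.view_as_complex(z)
            return x + torch.fft.irfft2(x_freq, s=(h, w), norm="ortho")  # residual connection

Because the mixing happens per Fourier mode rather than per pixel, the cost of a forward pass grows roughly with the number of modes, which is part of why such models can produce forecasts orders of magnitude faster than numerical simulators.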
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Temporal Abstractions-Augmented Temporally Contrastive Learning: An Alternative to the Laplacian in RL". The researchers propose an alternative method that recovers, in a non-uniform-prior setting, the expressiveness and the desired properties of the Laplacian representation. @HochreiterSepp tweeted "ArXiv Laplacian representation (Laplacian eigenvect.) is learned by a new contrastive objective that contrasts transitions to non-transitions. Works for non-uniform prior of states. Learns skills to solve difficult navigation tasks with sparse rewards".
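As a rough illustration of the contrastive objective described in the tweet, pairs of states connected by an environment transition are pulled together in embedding space while unrelated state pairs are pushed apart. The encoder, margin, and sampling scheme below are assumptions for illustration, not the paper's exact formulation.

    # Minimal sketch of a temporally contrastive objective (illustrative only).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def temporal_contrastive_loss(encoder: nn.Module,
                                  s: torch.Tensor,
                                  s_next: torch.Tensor,
                                  s_rand: torch.Tensor,
                                  margin: float = 1.0) -> torch.Tensor:
        """s, s_next: batches of consecutive states; s_rand: unrelated (non-transition) states."""
        z, z_next, z_rand = encoder(s), encoder(s_next), encoder(s_rand)
        # Attract representations of states one transition apart.
        positive = (z - z_next).pow(2).sum(dim=-1).mean()
        # Repel representations of unrelated states, up to a margin.
        negative = F.relu(margin - (z - z_rand).pow(2).sum(dim=-1)).mean()
        return positive + negative

Embeddings trained this way tend to vary smoothly along the environment's transition structure, which is the property the Laplacian representation is valued for and which the paper aims to retain without assuming a uniform prior over states.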
The paper shared the most on social media this week is "Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors" by Oran Gafni et al (Mar 2022) with 201 shares. @ak92501 (AK) tweeted "Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors abs: model achieves sota FID and human evaluation results, unlocking the ability to generate high fidelity images in a resolution of 512 × 512 pixels".
This week was extremely active for "Computer Science - Artificial Intelligence", with 258 new papers.
The paper discussed most in the news over the past week was by a team at Google: "AI system for fetal ultrasound in low-resource settings" by Ryan G. Gomes et al (Mar 2022), which was referenced 4 times, including in the article The Check Up: our latest health AI developments in Google. The paper was shared 4 times on social media. On Twitter, @yapp1e observed "AI system for fetal ultrasound in low-resource settings. Despite considerable progress in maternal healthcare, maternal and perinatal deaths remain high in low-to-middle income countries. Fetal ultrasound is an important com".
Leading researcher Ruslan Salakhutdinov (Carnegie Mellon University) published "PACS: A Dataset for Physical Audiovisual CommonSense Reasoning". @summarizedml tweeted "A new dataset for physical commonsense reasoning and evaluation of state-of-the-art models. 📄".
The paper shared the most on social media this week is "Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors" by Oran Gafni et al (Mar 2022).
This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 444 new papers.
The paper discussed most in the news over the past week was by a team at The University of Tokyo: "Robot peels banana with goal-conditioned dual-action deep imitation learning" by Heecheol Kim et al (Mar 2022), which was referenced 10 times, including in the article Watch a robot peel a banana without crushing it into oblivion in New Scientist. The paper got social media traction with 9 shares. On Twitter, @summarizedml said "This paper presents a goal-conditioned dual-action deep imitation learning method for dexterous robot manipulation tasks. 📄".
Leading researcher Ruslan Salakhutdinov (Carnegie Mellon University) published "PACS: A Dataset for Physical Audiovisual CommonSense Reasoning".
The paper shared the most on social media this week is "Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors" by Oran Gafni et al (Mar 2022).
Over the past week, 28 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was by a team at Cornell: "Characterizing Alternative Monetization Strategies on YouTube" by Yiqing Hua et al (Mar 2022), which was referenced 19 times, including in the article Demonetizing ‘problematic’ YouTubers isn’t effective, researchers say in Yahoo! News. The paper author, Yiqing Hua (Cornell), was quoted saying "We were surprised to discover how much money these creators are making from alternative monetization platforms". The paper got social media traction with 28 shares. The investigators focus on studying and characterizing these alternative monetization strategies. A Twitter user, @bramabramson, commented "Platforms may try to maximize the portion of user-generated content value chains they internalize. But that doesn't mean they outpace countervailing third-party siphoning efforts. (Which, no, I'm not going to muse about how that all intersects with #C11's proposed 4.2(3)(a).)".
This week was very active for "Computer Science - Human-Computer Interaction", with 39 new papers.
The paper discussed most in the news over the past week was "A Distance Matters Paradox: Facilitating Intra-Team Collaboration Can Harm Inter-Team Collaboration" by Xinlan Emily Hu et al (Feb 2022), which was referenced 2 times, including in the article Hybrid Works for Teams, But Throws a Wrench in Cross-Functional Teamwork. Here's How to Come Out on Top. in Inc.com. The paper was shared 2 times in social media.
This week was very active for "Computer Science - Learning", with 420 new papers.
The paper discussed most in the news over the past week was by a team at Lawrence Berkeley National Laboratory: "FourCastNet: A Global Data-driven High-resolution Weather Model using Adaptive Fourier Neural Operators" by Jaideep Pathak et al (Feb 2022).
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Temporal Abstractions-Augmented Temporally Contrastive Learning: An Alternative to the Laplacian in RL". The investigators propose an alternative method that recovers, in a non-uniform-prior setting, the expressiveness and the desired properties of the Laplacian representation. @HochreiterSepp tweeted "ArXiv Laplacian representation (Laplacian eigenvect.) is learned by a new contrastive objective that contrasts transitions to non-transitions. Works for non-uniform prior of states. Learns skills to solve difficult navigation tasks with sparse rewards".
The paper shared the most on social media this week is "Make-A-Scene: Scene-Based Text-to-Image Generation with Human Priors" by Oran Gafni et al (Mar 2022).
Over the past week, 16 new papers were published in "Computer Science - Multiagent Systems".
The paper shared the most on social media this week is "Distributing Collaborative Multi-Robot Planning with Gaussian Belief Propagation" by Aalok Patwardhan et al (Mar 2022) with 79 shares. @code_star (Cody Blakeney) tweeted "This makes me incredibly uncomfortable for some reason. Even if the failure rate is incredibly low I would hate to be in a car with a corrupted packet or something".
This week was active for "Computer Science - Neural and Evolutionary Computing", with 44 new papers.
The paper discussed most in the news over the past week was by a team at OpenAI: "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer" by Greg Yang et al (Mar 2022), which was referenced 2 times, including in the article Interview with the team behind Microsoft’s µTransfer in Analytics India Magazine. The paper author, Greg Yang (Microsoft), was quoted saying "You can’t train GPT-3 on a single GPU, much less tune its hyperparameters (HPs). But what if I tell you you *can* tune its HPs on a single GPU thanks to new theoretical advances?". The paper also got the most social media traction with 472 shares. On Twitter, @edwardjhu said "Better hyperparameters🚀= bigger bang for the buck💸! What if the next trillion-parameter model could be tuned by running a tiny one w millions of params instead? Our technique, μTransfer, enables that by aligning the optimal HPs across model sizes".
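To make the quoted idea concrete, the sketch below shows a simplified μP-style per-layer learning-rate scaling under which a rate tuned on a narrow proxy model is reused on a much wider one. The scaling rule and layer grouping are simplified assumptions for illustration, not the full μTransfer recipe or the authors' released tooling.

    # Minimal sketch of width-aware learning-rate scaling in the spirit of μTransfer (illustrative only).
    import torch
    import torch.nn as nn

    def make_mlp(width: int) -> nn.Sequential:
        return nn.Sequential(nn.Linear(128, width), nn.ReLU(), nn.Linear(width, 10))

    def mup_style_adam(model: nn.Sequential, base_lr: float,
                       base_width: int, width: int) -> torch.optim.Adam:
        # Layers whose fan-in grows with width get their learning rate scaled
        # down by (base_width / width); the input layer keeps the base rate.
        # (Simplified: biases are grouped with their layer's weights here.)
        groups = []
        for name, p in model.named_parameters():
            scale = 1.0 if name.startswith("0.") else base_width / width
            groups.append({"params": [p], "lr": base_lr * scale})
        return torch.optim.Adam(groups)

    # Tune base_lr on a narrow proxy model, then reuse the same value on a much wider one.
    small_opt = mup_style_adam(make_mlp(256), base_lr=3e-4, base_width=256, width=256)
    large_opt = mup_style_adam(make_mlp(4096), base_lr=3e-4, base_width=256, width=4096)

The point of the technique is that, under the right parametrization, the optimal hyperparameters become approximately width-independent, so the expensive model never has to be swept directly.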
The paper shared the most on social media this week is by a team at Stanford University: "MetaMorph: Learning Universal Controllers with Transformers" by Agrim Gupta et al (Mar 2022) with 101 shares. @mattbeane (Matt Beane) tweeted "Training robots is hard. Standard way is an "arm farm": your AI controls many identical robots and their work is training data for next time. Simulation has been pretty bad for this. And neither handles even a tiny change in robot design. This could be a new way forward".
This week was very active for "Computer Science - Robotics", with 88 new papers.
The paper discussed most in the news over the past week was by a team at The University of Tokyo: "Robot peels banana with goal-conditioned dual-action deep imitation learning" by Heecheol Kim et al (Mar 2022).
Leading researcher Abhinav Gupta (Carnegie Mellon University) published "R3M: A Universal Visual Representation for Robot Manipulation". This paper was also shared the most on social media, with 110 tweets; @popular_ML (Popular ML resources) tweeted "The most popular Arxiv link yesterday".