Week Ending 4.10.2022
RESEARCH WATCH: 4.10.2022
This week was very active for "Computer Science - Artificial Intelligence", with 214 new papers.
The paper discussed most in the news over the past week was by a team at DeepMind: "Can language models learn from explanations in context?" by Andrew K. Lampinen et al (Apr 2022), which was referenced 12 times, including in the article Deep Science: Combining vision and language could be the key to more capable AI in Yahoo! News. The paper got social media traction with 87 shares. A user, @summarizedml, tweeted "We find that explanations of few-shot examples can improve performance on challenging tasks. 📄", while @kushin_m posted "Really elegant idea! Particularly liked the rigorous ‘control’ explanation types to ensure there was some real signal in these results".
Leading researcher Pieter Abbeel (UC Berkeley) came out with "Imitating, Fast and Slow: Robust learning from demonstrations via decision-time planning".
The paper shared the most on social media this week is by a team at Google: "Video Diffusion Models" by Jonathan Ho et al (Apr 2022) with 197 shares. @JackRadford95 (Jack.Radford) tweeted "Cool to see these move to time-domain! 😎😎".
This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 381 new papers.
The paper discussed most in the news over the past week was by a team at The University of Tokyo: "Robot peels banana with goal-conditioned dual-action deep imitation learning" by Heecheol Kim et al (Mar 2022), which was referenced 14 times, including in the article Researchers Use Imitation to Teach a Robot How to Peel a Banana in PCMag UK. The paper got social media traction with 11 shares. A Twitter user, @summarizedml, said "This paper presents a goal-conditioned dual-action deep imitation learning method for dexterous robot manipulation tasks. 📄", while @IFLScience commented "You can read more about it in the pre-print paper posted on arXiv".
Leading researcher Pieter Abbeel (UC Berkeley) published "Coarse-to-Fine Q-attention with Learned Path Ranking". @summarizedml tweeted "Learned Path Ranking (LPR), a method that accepts an end-effector goal pose, and learns to rank a set of goal 📄".
The paper shared the most on social media this week is by a team at NYU: "The Effects of Regularization and Data Augmentation are Class Dependent" by Randall Balestriero et al (Apr 2022) with 218 shares. The investigators demonstrate that techniques such as data augmentation (DA) or weight decay reduce a model's complexity in a way that is uneven, and therefore unfair, across classes. @rcsaxe (Ryan Saxe) tweeted "It's nice to see a study to put strength behind intuition: Augmentation is a prior that says "if I perturb my input with a specific transform F, the resulting data point is from the same class". IMO It's natural that some transforms aren't as compatible with some classes".
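The class dependence the NYU team describes can be seen in a toy example (ours, not the paper's): a horizontal flip is label-preserving for a left-right-symmetric shape but changes an asymmetric one, so the augmentation prior "flipped input ⇒ same class" is only valid for some classes.

```python
def hflip(img):
    """Flip a 2D image (a list of rows) left-to-right."""
    return [row[::-1] for row in img]

# A symmetric "O"-like glyph: flipping leaves it unchanged,
# so the prior "flip => same class" holds for this class.
glyph_O = [
    [0, 1, 0],
    [1, 0, 1],
    [0, 1, 0],
]

# An asymmetric "L"-like glyph: flipping produces a mirrored shape,
# so the same augmentation distorts this class.
glyph_L = [
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 1],
]

print(hflip(glyph_O) == glyph_O)  # True  -> flip is label-preserving here
print(hflip(glyph_L) == glyph_L)  # False -> flip changes the shape
```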
This week was active for "Computer Science - Computers and Society", with 32 new papers.
The paper discussed most in the news over the past week was by a team at University of Oxford: "Goodbye Tracking? Impact of iOS App Tracking Transparency and Privacy Labels" by Konrad Kollnig et al (Apr 2022), which was referenced 14 times, including in the article Study of Apple’s ATT impact highlights competition concerns in TechCrunch. The paper got social media traction with 124 shares. On Twitter, @RDBinns observed "One of the worrying details from this research was the circumvention of Apple's privacy controls by trackers. When you 'Ask App Not to Track', iOS stops third parties accessing the advertising ID. So some resort to fingerprinting instead. One example is Alibaba-owned Umeng".
This week was very active for "Computer Science - Human-Computer Interaction", with 39 new papers.
This week was very active for "Computer Science - Learning", with 388 new papers.
The paper discussed most in the news over the past week was by a team at Lawrence Berkeley National Laboratory: "FourCastNet: A Global Data-driven High-resolution Weather Model using Adaptive Fourier Neural Operators" by Jaideep Pathak et al (Feb 2022), which was referenced 27 times, including in the article Newsletter #66 — Nvidia upgrades its Omniverse in Medium.com. The paper author, Karthik Kashinath (Lawrence Berkeley National Laboratory), was quoted saying "Digital twins allow researchers and decision-makers to interact with data and rapidly explore what-if scenarios, which are nearly impossible with traditional modeling techniques because they're expensive and time consuming". The paper got social media traction with 76 shares. A user, @hillbig, tweeted "FourCastNet is a ViT-based global weather forecasting model that provides a week-long forecast at 0.25° resolution (30km x 30km) in less than 2 seconds (45000x faster than current simulators) and enables the use of thousands of ensemble members".
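To put the quoted speedup in perspective, a back-of-envelope calculation using only the figures from the tweet above (the conventional-simulator time is implied by those figures, not independently verified):

```python
# Figures quoted in @hillbig's tweet about FourCastNet.
fourcastnet_seconds = 2   # "a week-long forecast ... in less than 2 seconds"
speedup = 45_000          # "45000x faster than current simulators"

# Implied runtime of a conventional simulator for the same forecast.
traditional_seconds = fourcastnet_seconds * speedup
print(traditional_seconds / 3600)  # 25.0 -> roughly a full day of compute
```

That roughly day-long conventional runtime is what makes the "thousands of ensemble members" mentioned in the tweet impractical without a surrogate model.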
Leading researcher Pieter Abbeel (UC Berkeley) came out with "Coarse-to-Fine Q-attention with Learned Path Ranking". @summarizedml tweeted "Learned Path Ranking (LPR), a method that accepts an end-effector goal pose, and learns to rank a set of goal 📄".
The paper shared the most on social media this week is by a team at NYU: "The Effects of Regularization and Data Augmentation are Class Dependent" by Randall Balestriero et al (Apr 2022).
Over the past week, 13 new papers were published in "Computer Science - Multiagent Systems".
Leading researcher Luc Van Gool (Computer Vision Laboratory) came out with "Deep Interactive Motion Prediction and Planning: Playing Games with Motion Prediction Models".
This week was active for "Computer Science - Neural and Evolutionary Computing", with 36 new papers.
This week was very active for "Computer Science - Robotics", with 74 new papers.
The paper discussed most in the news over the past week was by a team at The University of Tokyo: "Robot peels banana with goal-conditioned dual-action deep imitation learning" by Heecheol Kim et al (Mar 2022).
Leading researcher Pieter Abbeel (UC Berkeley) came out with "Coarse-to-Fine Q-attention with Learned Path Ranking". @summarizedml tweeted "Learned Path Ranking (LPR)".
The paper shared the most on social media this week is by a team at Stanford University: "ObjectFolder 2.0: A Multisensory Object Dataset for Sim2Real Transfer" by Ruohan Gao et al (Apr 2022) with 53 shares. @summarizedml (SummarizedML) tweeted "ObjectFolder 2.0, a large-scale, multisensory dataset that introduces 100 virtualized objects with visual, acoustic, 📄".