Week Ending 4.17.2022
RESEARCH WATCH: 4.17.2022


This week was active for "Computer Science", with 1,198 new papers.

This week was very active for "Computer Science - Artificial Intelligence", with 212 new papers.

  • The paper discussed most in the news over the past week was by a team at DeepMind: "Can language models learn from explanations in context?" by Andrew K. Lampinen et al (Apr 2022), which was referenced 12 times, including in the article Deep Science: Combining vision and language could be the key to more capable AI in Yahoo! News. The paper got social media traction with 89 shares. A user, @PaperTldr, tweeted "🗜90% Language models can perform new tasks by adapting to a few in-context examples- humans can benefit from rapid learning from examples that connect examples to task principles".

  • Leading researcher Pieter Abbeel (UC Berkeley) came out with "Sim-to-Real 6D Object Pose Estimation via Iterative Self-training for Robotic Bin-picking". The authors propose an iterative self-training framework for sim-to-real 6D object pose estimation to facilitate cost-effective robotic grasping. @AjdDavison tweeted "This new work lines up with my view that explicit 3D scene understanding is the key to powerful, general manipulation. If you have good object models, estimate their pose explicitly. Future networked robots will surely have access to huge, ever-growing object databases".

  • The paper shared the most on social media this week is "A Review on Language Models as Knowledge Bases" by Badr AlKhamissi et al (Apr 2022) with 105 shares. @morris_phd (AI News Clips by Morris Lee: News to help your R&D) tweeted "Large language models have lots of implicit knowledge".

This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 301 new papers.

Over the past week, 19 new papers were published in "Computer Science - Computers and Society".

  • The paper discussed most in the news over the past week was by a team at Cornell: "Characterizing Alternative Monetization Strategies on YouTube" by Yiqing Hua et al (Mar 2022), which was referenced 23 times, including in the article Why So Many YouTube and TikTok Stars Want to Sell You a Shirt (And Maybe a Burger) in MSN United States. The paper author, Yiqing Hua (Cornell), was quoted saying "We were surprised to discover how much money these creators are making from alternative monetization platforms". The paper got social media traction with 35 shares. The authors focus on studying and characterizing these alternative monetization strategies. On Twitter, @bramabramson observed "Platforms may try to maximize the portion of user-generated content value chains they internalize. But that doesn't mean they outpace countervailing third-party siphoning efforts. (Which, no, I'm not going to muse about how that all intersects with #C11's proposed 4.2(3)(a).)".

This week was very active for "Computer Science - Human-Computer Interaction", with 42 new papers.

  • The paper discussed most in the news over the past week was "A Performance Evaluation of Nomon: A Flexible Interface for Noisy Single-Switch Users" by Nicholas Bonaker et al (Apr 2022), which was referenced 1 time, in the article New System Speeds Up Typing for the Motor Impaired in Medgadget.com. The paper author, Tamara Broderick (Massachusetts Institute of Technology), was quoted saying "So far, the feedback from motor-impaired users has been invaluable to us; we’re very grateful to the motor-impaired user who commented on our initial interface and the separate motor-impaired user who participated in our study." The paper was shared 1 time on social media. On Twitter, @ta_broderick commented "Nomon uses Bayesian inference to make the most of the limited information we get from users. You can find more info and try it out yourself at ! Our new paper is slated to appear in #CHI2022, and a preprint is available at".

This week was very active for "Computer Science - Learning", with 373 new papers.

Over the past week, 11 new papers were published in "Computer Science - Multiagent Systems".

This week was active for "Computer Science - Neural and Evolutionary Computing", with 37 new papers.

  • The paper discussed most in the news over the past week was by a team at Google: "Practical tradeoffs between memory, compute, and performance in learned optimizers" by Luke Metz et al (Mar 2022), which was referenced 1 time, in the article Training Learned Optimizers in Medium.com. The paper got social media traction with 85 shares. A Twitter user, @Luke_Metz, commented "Memory, compute, & perf tradeoff in learned optimizers Learned optimizers replace hand designed rules like SGD/Adam with learned functions, ie neural net which takes transformed gradients as inputs + outputs weight updates. How do we design this NN? 1/5".

  • The paper shared the most on social media this week is by a team at Google: "Evolving Modular Soft Robots without Explicit Inter-Module Communication using Local Self-Attention" by Federico Pigozzi et al (Apr 2022) with 77 shares. The authors focus on Voxel-based Soft Robots (VSRs), aggregations of mechanically identical elastic blocks. @hardmaru tweeted "Without any message passing, it’s a bit surprising that this worked at all! The solution is that a self-attention controller is needed for each module to effectively process local environmental inputs. This will be presented at #GECCO2022 as a full paper".

This week was active for "Computer Science - Robotics", with 64 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.