Week Ending 12.19.2021

 

RESEARCH WATCH: 12.19.2021

 

This week was very active for "Computer Science - Artificial Intelligence", with 238 new papers.

  • The paper discussed most in the news over the past week was by a team at DeepMind: "Ethical and social risks of harm from Language Models" by Laura Weidinger et al (Dec 2021), which was referenced 6 times, including in the article This Week in Green AI #3 on Medium.com. Melanie Mitchell (Santa Fe Institute), who is not part of the study, said "[The] ways that we measure performance of these systems needs to be expanded … When the benchmarks are changed a little bit, they [often] don’t generalize well". The paper got social media traction with 56 shares. The authors aim to help structure the risk landscape associated with large-scale Language Models (LMs). A user, @TobyWalsh, tweeted "The first word is misplaced. The paper is not comprehensive. 30 pages of risks. 3 pages on possible means to mitigates those risks. If it were comprehensive, it would cover mitigation in as much detail as harms".

  • Leading researcher Jianfeng Gao (Microsoft) came out with "RegionCLIP: Region-based Language-Image Pretraining". @ak92501 tweeted "RegionCLIP: Region-based Language-Image Pretraining abs: When transferring pretrained model to the open-vocabulary object detection task, method outperforms the sota by 3.8 AP50 and 2.2 AP for novel categories on COCO and LVIS datasets".

  • The paper shared the most on social media this week is by a team at Stanford University: "Efficient Geometry-aware 3D Generative Adversarial Networks" by Eric R. Chan et al (Dec 2021) with 190 shares. The authors improve the computational efficiency and image quality of 3D GANs without overly relying on the approximations made by existing 3D-aware methods. @jongranskog (Jonathan Granskog) tweeted "Super clever generative NeRF work! It uses the powerful StyleGAN2 architecture to produce three axis-aligned feature maps that are then sampled during ray marching by projecting query points to these. Reminds me of how 3D objects can sometimes be approximated by sprite planes". A brief illustrative sketch of this tri-plane sampling idea follows this list.

  • The most influential Twitter user discussing papers is AK, who shared "Contextualized Spatio-Temporal Contrastive Learning with Self-Supervision" by Liangzhe Yuan et al (Dec 2021) and said: "Contextualized Spatio-Temporal Contrastive Learning with Self-Supervision abs: For spatio-temporal action localization, ConST-CL achieves 39.4% mAP with ground-truth boxes and 30.5% mAP with detected boxes on the AVA-Kinetics validation set".
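
To make the tri-plane idea in the tweet above concrete, here is a minimal sketch, assuming a NumPy setting, of how 3D query points along a camera ray can be projected onto three axis-aligned feature planes, bilinearly sampled, and aggregated. This is not the authors' code; the array shapes, function names, and the summation used for aggregation are illustrative assumptions.

    import numpy as np

    def bilinear_sample(plane, uv):
        # Bilinearly sample a (H, W, C) feature plane at continuous
        # coordinates uv in [-1, 1]^2, one row per query point.
        H, W, _ = plane.shape
        x = (uv[:, 0] + 1.0) * 0.5 * (W - 1)   # map to pixel space
        y = (uv[:, 1] + 1.0) * 0.5 * (H - 1)
        x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
        x1, y1 = np.minimum(x0 + 1, W - 1), np.minimum(y0 + 1, H - 1)
        wx, wy = (x - x0)[:, None], (y - y0)[:, None]
        return ((1 - wx) * (1 - wy) * plane[y0, x0] + wx * (1 - wy) * plane[y0, x1]
                + (1 - wx) * wy * plane[y1, x0] + wx * wy * plane[y1, x1])

    def triplane_features(points, plane_xy, plane_xz, plane_yz):
        # Project 3D points (N, 3) in [-1, 1]^3 onto the three axis-aligned
        # planes, sample each plane, and aggregate (here by summation).
        f_xy = bilinear_sample(plane_xy, points[:, [0, 1]])
        f_xz = bilinear_sample(plane_xz, points[:, [0, 2]])
        f_yz = bilinear_sample(plane_yz, points[:, [1, 2]])
        return f_xy + f_xz + f_yz   # (N, C) features for a small decoder

    # Toy usage: 64 query points sampled along camera rays.
    planes = [np.random.randn(256, 256, 32) for _ in range(3)]
    pts = np.random.uniform(-1.0, 1.0, size=(64, 3))
    feats = triplane_features(pts, *planes)
    print(feats.shape)   # (64, 32)

In the paper itself, the three planes are produced by a StyleGAN2-style generator and the aggregated features are decoded by a lightweight MLP into color and density for volume rendering; the snippet only illustrates the projection-and-sampling step.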

This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 333 new papers.

Over the past week, 26 new papers were published in "Computer Science - Computers and Society".

This week was active for "Computer Science - Human-Computer Interaction", with 30 new papers.

  • The paper discussed most in the news over the past week was by a team at the Institute of Information Security: "Phishing in Organizations: Findings from a Large-Scale and Long-Term Study" by Daniele Lain et al (Dec 2021), which was referenced 3 times, including in the article Large-scale phishing study shows who bites the bait more often in Bleeping Computer. The paper also got the most social media traction with 57 shares. The authors present findings from a large-scale and long-term phishing experiment that they conducted in collaboration with a partner company. A Twitter user, @SrdjanCapkun, commented "15 months, more than 14,000 participants. How susceptible are employees of a large organization to phishing? Does training help? Can employees collectively detect phishing? Answers in this great work by (to appear in IEEE S&P 2022)".

This week was extremely active for "Computer Science - Learning", with 465 new papers.

  • The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier that Validates 301 New Exoplanets" by Hamed Valizadegan et al (Nov 2021), which was referenced 83 times, including in the article Hundreds of new exoplanets from Kepler data in EarthSky. The paper's first author, Hamed Valizadegan (machine learning manager with the Universities Space Research Association at NASA Ames), was quoted saying "When ExoMiner says something is a planet, you can be sure it’s a planet. ExoMiner is highly accurate and in some ways more reliable than both existing machine classifiers and the human experts it’s meant to emulate because of the biases that come with human labeling. Now that we’ve trained ExoMiner using Kepler data, with a little fine-tuning, we can transfer that learning to other missions, including TESS, which we’re currently working on. There’s room to grow." The paper got social media traction with 41 shares. A user, @storybywill, tweeted "We didn't. The team who developed ExoMine did, and here is their paper", while @arxiv_cs_LG said "ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier to Mine Exoplanets. Valizadegan, Martinho, Wilkens, Jenkins, Smith, Caldwell, Twicken, Gerum, Walia, Hausknecht, Lubin, Bryson, and Oza".

  • Leading researcher Kyunghyun Cho (New York University) published "Amortized Noisy Channel Neural Machine Translation". @summarizedml tweeted "We aim to build an amortized noisy-channel NMT model such that greedily decoding from it would generate translations that max 📄".

  • The paper shared the most on social media this week is by a team at Stanford University: "Efficient Geometry-aware 3D Generative Adversarial Networks" by Eric R. Chan et al (Dec 2021).

Over the past week, 19 new papers were published in "Computer Science - Multiagent Systems".

Over the past week, 18 new papers were published in "Computer Science - Neural and Evolutionary Computing".

This week was active for "Computer Science - Robotics", with 59 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.