Week Ending 12.19.2021
RESEARCH WATCH: 12.19.2021
This week was very active for "Computer Science - Artificial Intelligence", with 238 new papers.
The paper discussed most in the news over the past week was by a team at DeepMind: "Ethical and social risks of harm from Language Models" by Laura Weidinger et al (Dec 2021), which was referenced 6 times, including in the article This Week in Green AI #3 on Medium.com. Melanie Mitchell (Santa Fe Institute), who is not part of the study, said "[The] ways that we measure performance of these systems needs to be expanded … When the benchmarks are changed a little bit, they [often] don’t generalize well". The paper got social media traction with 56 shares. The authors aim to help structure the risk landscape associated with large-scale Language Models (LMs). A user, @TobyWalsh, tweeted "The first word is misplaced. The paper is not comprehensive. 30 pages of risks. 3 pages on possible means to mitigates those risks. If it were comprehensive, it would cover mitigation in as much detail as harms".
Leading researcher Jianfeng Gao (Microsoft) came out with "RegionCLIP: Region-based Language-Image Pretraining". @ak92501 tweeted "RegionCLIP: Region-based Language-Image Pretraining abs: When transferring pretrained model to the open-vocabulary object detection task, method outperforms the sota by 3.8 AP50 and 2.2 AP for novel categories on COCO and LVIS datasets".
The paper shared the most on social media this week is by a team at Stanford University: "Efficient Geometry-aware 3D Generative Adversarial Networks" by Eric R. Chan et al (Dec 2021) with 190 shares. The authors improve the computational efficiency and image quality of 3D GANs without relying heavily on the approximations used by prior methods. @jongranskog (Jonathan Granskog) tweeted "Super clever generative NeRF work! It uses the powerful StyleGAN2 architecture to produce three axis-aligned feature maps that are then sampled during ray marching by projecting query points to these. Reminds me of how 3D objects can sometimes be approximated by sprite planes".
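The tri-plane sampling idea described in the tweet can be sketched in a few lines: project each 3D query point onto three axis-aligned feature planes, look up a feature at each projection, and combine the results. The sketch below is illustrative only, assuming NumPy arrays and nearest-neighbour lookup (the paper uses bilinear interpolation within a full neural pipeline); shapes and names are not from the paper.

```python
import numpy as np

def sample_triplane(planes, points):
    """Gather tri-plane features for 3D query points.

    planes: (3, C, H, W) axis-aligned feature maps (XY, XZ, YZ),
            e.g. as produced by a StyleGAN2-style backbone.
    points: (N, 3) coordinates in [-1, 1].
    Returns summed per-point features of shape (N, C).
    """
    # Project each 3D point onto the three planes.
    projections = [points[:, [0, 1]],   # XY plane
                   points[:, [0, 2]],   # XZ plane
                   points[:, [1, 2]]]   # YZ plane
    feats = np.zeros((points.shape[0], planes.shape[1]))
    for plane, uv in zip(planes, projections):
        _, H, W = plane.shape
        # Map [-1, 1] coordinates to pixel indices (nearest neighbour
        # for brevity; the real method interpolates bilinearly).
        ix = np.clip(((uv[:, 0] + 1) / 2 * (W - 1)).round().astype(int), 0, W - 1)
        iy = np.clip(((uv[:, 1] + 1) / 2 * (H - 1)).round().astype(int), 0, H - 1)
        feats += plane[:, iy, ix].T   # gather per-point features, (N, C)
    return feats
```

In the full method, the summed features would then be decoded to colour and density for volume rendering along each camera ray.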
The most influential Twitter user discussing papers is AK, who shared "Contextualized Spatio-Temporal Contrastive Learning with Self-Supervision" by Liangzhe Yuan et al (Dec 2021) and said: "Contextualized Spatio-Temporal Contrastive Learning with Self-Supervision abs: For spatio-temporal action localization, ConST-CL achieves 39.4% mAP with ground-truth boxes and 30.5% mAP with detected boxes on the AVA-Kinetics validation set".
This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 333 new papers.
The paper discussed most in the news over the past week was by a team at University of Edinburgh: "Film Trailer Generation via Task Decomposition" by Pinelopi Papalampidi et al (Nov 2021), which was referenced 8 times, including in the article When the AI creates the movie trailer on Exactrelease.org. The paper got social media traction with 6 shares. A Twitter user, @summarizedml, commented "We model movies as graphs, where nodes are shots and edges denote semantic relations between them. 📄".
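The shot-graph representation mentioned in the tweet can be sketched as a simple weighted graph: shots as nodes, semantic relations as weighted edges. This is a minimal illustrative sketch only; the class, field names, and weights are hypothetical and not taken from the paper.

```python
class ShotGraph:
    """Toy model of a movie as a graph of shots (illustrative only)."""

    def __init__(self):
        self.shots = {}   # shot_id -> metadata (e.g. start/end times)
        self.edges = {}   # shot_id -> {neighbor_id: relation strength}

    def add_shot(self, shot_id, **metadata):
        self.shots[shot_id] = metadata
        self.edges.setdefault(shot_id, {})

    def relate(self, a, b, weight):
        # Undirected semantic relation between two shots.
        self.edges[a][b] = weight
        self.edges[b][a] = weight

    def neighbors(self, shot_id):
        # Related shots, strongest relation first.
        return sorted(self.edges[shot_id],
                      key=self.edges[shot_id].get, reverse=True)

# Hypothetical usage: two shots linked by a semantic relation.
g = ShotGraph()
g.add_shot("s1", start=0.0, end=4.2)
g.add_shot("s2", start=4.2, end=9.0)
g.relate("s1", "s2", weight=0.8)
```

A trailer generator could then traverse such a graph, selecting a path of strongly related, high-interest shots.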
Leading researcher Yann LeCun (New York University) came out with "Sparse Coding with Multi-Layer Decoders using Variance Regularization".
The paper shared the most on social media this week is by a team at Stanford University: "Efficient Geometry-aware 3D Generative Adversarial Networks" by Eric R. Chan et al (Dec 2021).
Over the past week, 26 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was "Where the Earth is flat and 9/11 is an inside job: A comparative algorithm audit of conspiratorial information in web search results" by Aleksandra Urman et al (Dec 2021), which was referenced 7 times, including in the article Google is the search engine that censors the most “conspiracy theories” in InfoWars. The paper also got the most social media traction with 251 shares. On Twitter, @danieljhicks said "I switched from DDG to a Bing wrapper a few months back because I noticed DDG was serving me a lot of right wing news sites. It's been more subtle with Bing, but still an issue. 😕", while @SamParkerSenate posted "'If a few people at Goolag don't like it, then it doesn't exist.' One of the most evil companies to ever exist".
This week was active for "Computer Science - Human-Computer Interaction", with 30 new papers.
The paper discussed most in the news over the past week was by a team at Institute of Information Security: "Phishing in Organizations: Findings from a Large-Scale and Long-Term Study" by Daniele Lain et al (Dec 2021), which was referenced 3 times, including in the article Large-scale phishing study shows who bites the bait more often in Bleeping Computer. The paper also got the most social media traction with 57 shares. The authors present findings from a large-scale and long-term phishing experiment that they conducted in collaboration with a partner company. A Twitter user, @SrdjanCapkun, commented "15 months, more than 14,000 participants How susceptible are employees of a large organization to phishing? Does training help? Can employees collectively detect phishing? Answers in this great work by (to appear in IEEE S&P 2022)".
This week was extremely active for "Computer Science - Learning", with 465 new papers.
The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier that Validates 301 New Exoplanets" by Hamed Valizadegan et al (Nov 2021), which was referenced 83 times, including in the article Hundreds of new exoplanets from Kepler data in EarthSky. The paper author, Hamed Valizadegan (Machine learning manager with the Universities Space Research Association at Ames), was quoted saying "When ExoMiner says something is a planet, you can be sure it’s a planet. ExoMiner is highly accurate and in some ways more reliable than both existing machine classifiers and the human experts it’s meant to emulate because of the biases that come with human labeling. Now that we’ve trained ExoMiner using Kepler data, with a little fine-tuning, we can transfer that learning to other missions, including TESS, which we’re currently working on. There’s room to grow." The paper got social media traction with 41 shares. A user, @storybywill, tweeted "We didn't. The team who developed ExoMine did, and here is their paper", while @arxiv_cs_LG said "ExoMiner: A Highly Accurate and Explainable Deep Learning Classifier to Mine Exoplanets. Valizadegan, Martinho, Wilkens, Jenkins, Smith, Caldwell, Twicken, Gerum, Walia, Hausknecht, Lubin, Bryson, and Oza".
Leading researcher Kyunghyun Cho (New York University) published "Amortized Noisy Channel Neural Machine Translation". @summarizedml tweeted "We aim to build an amortized noisy-channel NMT model such that greedily decoding from it would generate translations that max 📄".
The paper shared the most on social media this week is by a team at Stanford University: "Efficient Geometry-aware 3D Generative Adversarial Networks" by Eric R. Chan et al (Dec 2021).
Over the past week, 19 new papers were published in "Computer Science - Multiagent Systems".
Over the past week, 18 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was by a team at Central South University: "Learning by Active Forgetting for Neural Networks" by Jian Peng et al (Nov 2021), which was referenced 1 time, including in the article Best of arXiv.org for AI, Machine Learning, and Deep Learning – November 2021 in InsideBIGDATA. The paper got social media traction with 5 shares. A user, @gastronomy, tweeted "> Remembering and forgetting mechanisms are two sides of the same coin in a human learning-memory system. Inspired by human brain memory mechanisms, modern machine".
This week was active for "Computer Science - Robotics", with 59 new papers.
Leading researcher Sergey Levine (University of California, Berkeley) came out with "Autonomous Reinforcement Learning: Formalism and Benchmarking".