Week Ending 5.15.2022
RESEARCH WATCH: 5.15.2022
This week was very active for "Computer Science - Artificial Intelligence", with 200 new papers.
The paper discussed most in the news over the past week was by a team at Google: "Building Machine Translation Systems for the Next Thousand Languages" by Ankur Bapna et al (May 2022), which was referenced 15 times, including in the article Google Translate gains 24 new languages from the Americas, India, and Africa in ZDNet. The paper got social media traction with 117 shares. The authors share findings from their effort to build practical machine translation (MT) systems capable of translating across over one thousand languages. A user, @iseeaswell, tweeted "Happy to finally be public about my main project over the last few years: adding more languages to Translate!", while @popular_ML observed "The most popular Arxiv link yesterday".
Leading researcher Oriol Vinyals (DeepMind) published "A Generalist Agent", which had 31 shares over the past 2 days. @HochreiterSepp tweeted "ArXiv Gato: a single generalist policy. Can play Atari, caption images, chat, stack blocks. Output determined by context (text, joint torques, buttons). 1.2B para. decoder-only transformer with 24 layers. Impressive results on control, robotics, language".
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 253 new papers.
The paper discussed most in the news over the past week was by a team at Idiap Research Institute: "Are GAN-based Morphs Threatening Face Recognition?" by Eklavya Sarkar et al (May 2022), which was referenced 15 times, including in the article Perceptron: AI bias can arise from annotation instructions in TechCrunch. The paper got social media traction with 6 shares. A user, @summarizedml, tweeted "A dataset and code for four types of morphing attacks, including those that use StyleGAN 2 to generate synthetic morphs 📄".
Leading researcher Luc Van Gool (Computer Vision Laboratory) came out with "A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials". @summarizedml tweeted "A continual deepfake detection benchmark that simulates the real-world scenario and exploits multiple approaches to adapt them to the continual learning problem. 📄".
The paper shared the most on social media this week is by a team at Google: "Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation" by Abhijit Kundu et al (May 2022) with 148 shares. @ducha_aiki (Dmytro Mishkin 🇺🇦) tweeted "Panoptic Neural Fields: A Semantic Object-Aware Neural Scene Representation Abhijit Kundu et 8 al. tl;dr: Make NERF predict not only RGB, but also semseg, depth, instance seg. 1 MLP for background, several for dynamic obj. It proposes learned init. 1/2".
Over the past week, 29 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was by a team at Indiana University: "Manipulating Twitter Through Deletions" by Christopher Torres-Lugo et al (Mar 2022), which was referenced 90 times, including in the article Elon Musk says relaxing content rules on Twitter will boost free speech, but research shows otherwise in Tech Xplore. The paper got social media traction with 81 shares. The authors provide the first exhaustive, large-scale analysis of anomalous deletion patterns involving more than a billion deletions by over 11 million accounts. A user, @MLuczak, tweeted "large-scale analysis of anomalous deletion patterns involving more than a billion deletions by over 11 million accounts. Small fraction of accounts delete a large number of tweets daily. We also uncover two abusive behaviors that exploit deletions".
The paper shared the most on social media this week is by a team at University College Dublin: "The Forgotten Margins of AI Ethics" by Abeba Birhane et al (May 2022) with 204 shares. @WellsLucasSanto (wells (oakland enby)) tweeted "Lots of really great insight from this paper about the state of ethics + justice in published papers at FAccT and AIES in the last few years. I particularly appreciate the section about ProPublica's COMPAS article, which we come back to time and time again (often uncritically)".
This week was very active for "Computer Science - Human-Computer Interaction", with 41 new papers.
This week was very active for "Computer Science - Learning", with 371 new papers.
The paper discussed most in the news over the past week was "OPT: Open Pre-trained Transformer Language Models" by Susan Zhang et al (May 2022), which was referenced 21 times, including in the article Facebook's new language model has 'high propensity to generate toxic language and reinforce harmful stereotypes' in Computing.co.uk. The paper author, Sameer Singh, was quoted saying "Disallowing commercial access completely or putting it behind a paywall may be the only way to justify, from a business perspective, why these companies should build and release LLMs in the first place". The paper got social media traction with 927 shares. On Twitter, @jonrjeffrey observed "Was just thinking it would be nice if some of OpenAI's models were open. Looks like Meta AI is beating OpenAI to the punch here for opening up a 175B parameter language model for researchers", while @loretoparisi observed "OPT-175B is comparable to #GPT3 while requiring only 1/7th the carbon footprint to develop. And... it’s #opensource 💥".
Leading researcher Oriol Vinyals (DeepMind) published "A Generalist Agent", which had 31 shares over the past 2 days. @HochreiterSepp tweeted "ArXiv Gato: a single generalist policy. Can play Atari, caption images, chat, stack blocks. Output determined by context (text, joint torques, buttons). 1.2B para. decoder-only transformer with 24 layers. Impressive results on control, robotics, language".
The paper shared the most on social media this week is by a team at Google: "Building Machine Translation Systems for the Next Thousand Languages" by Ankur Bapna et al (May 2022) with 117 shares. The authors share findings from their effort to build practical machine translation (MT) systems capable of translating across over one thousand languages. @popular_ML (Popular ML resources) tweeted "The most popular Arxiv link yesterday".
Over the past week, 15 new papers were published in "Computer Science - Multiagent Systems".
The paper discussed most in the news over the past week was "Best-Response Bayesian Reinforcement Learning with Bayes-adaptive POMDPs for Centaurs" by Mustafa Mert Çelikok et al (Apr 2022), which was referenced once, in the article Artificial intelligence can offset human frailties, leading to better decisions in Tech Xplore. The paper author, Mustafa Mert Çelikok, was quoted saying "grandmasters think they know better than AIs and override them when they disagree—that's their downfall". The paper was shared once on social media. A Twitter user, @summarizedml, commented "A novel Bayesian best-response framework for centaurs where the AI's goal is to complement the human, but the machine must 📄".
Over the past week, 13 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was by a team at Microsoft: "Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems" by Paul Smolensky et al (May 2022), which was referenced once, in the article How Neurocompositional computing results in a new generation of AI systems in Analytics India Magazine. The paper also got the most social media traction with 217 shares. A user, @RTomMcCoy, tweeted "🤖🧠NEW PAPER🧠🤖 What explains the dramatic recent progress in AI? The standard answer is scale (more data & compute). But this misses a crucial factor: a new type of computation. 1/5".
This week was very active for "Computer Science - Robotics", with 75 new papers.
The paper discussed most in the news over the past week was by a team at DeepMind: "A Generalist Agent" by Scott Reed et al (May 2022), which was referenced 3 times, including in the article DeepMind's 'Gato' is mediocre, so why did they build it? in ZDNet. The paper author, Scott Reed (DeepMind), was quoted saying "With a single set of weights, Gato can engage in dialogue, caption images, stack blocks with a real robot arm, outperform humans at playing Atari games, navigate in simulated 3D environments, follow instructions, and more". The paper got social media traction with 100 shares. A Twitter user, @HochreiterSepp, said "ArXiv Gato: a single generalist policy. Can play Atari, caption images, chat, stack blocks. Output determined by context (text, joint torques, buttons). 1.2B para. decoder-only transformer with 24 layers. Impressive results on control, robotics, language".