Eye On AI


Week Ending 5.15.2022

RESEARCH WATCH: 5.15.2022

SPONSORED BY

ClearML is an open-source MLOps solution. Whether you're a Data Engineer, ML Engineer, DevOps engineer, or Data Scientist, ClearML is hands-down the best collaborative MLOps tool, with full visibility and extensibility.

This week was very active for "Computer Science - Artificial Intelligence", with 200 new papers.

  • The paper discussed most in the news over the past week was by a team at Google: "Building Machine Translation Systems for the Next Thousand Languages" by Ankur Bapna et al (May 2022), which was referenced 15 times, including in the article Google Translate gains 24 new languages from the Americas, India, and Africa in ZDNet. The paper got social media traction with 117 shares. The authors share findings from their effort to build practical machine translation (MT) systems capable of translating across over one thousand languages. A user, @iseeaswell, tweeted "Happy to finally be public about my main project over the last few years: adding more languages to Translate!", while @popular_ML observed "The most popular Arxiv link yesterday".

  • Leading researcher Oriol Vinyals (DeepMind) published "A Generalist Agent", which had 31 shares over the past 2 days. @HochreiterSepp tweeted "ArXiv Gato: a single generalist policy. Can play Atari, caption images, chat, stack blocks. Output determined by context (text, joint torques, buttons). 1.2B para. decoder-only transformer with 24 layers. Impressive results on control, robotics, language".

This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 253 new papers.

Over the past week, 29 new papers were published in "Computer Science - Computers and Society".

  • The paper discussed most in the news over the past week was by a team at Indiana University: "Manipulating Twitter Through Deletions" by Christopher Torres-Lugo et al (Mar 2022), which was referenced 90 times, including in the article Elon Musk says relaxing content rules on Twitter will boost free speech, but research shows otherwise in Tech Xplore. The paper got social media traction with 81 shares. The authors provide the first exhaustive, large-scale analysis of anomalous deletion patterns involving more than a billion deletions by over 11 million accounts. A user, @MLuczak, tweeted "large-scale analysis of anomalous deletion patterns involving more than a billion deletions by over 11 million accounts. Small fraction of accounts delete a large number of tweets daily. We also uncover two abusive behaviors that exploit deletions".

  • The paper shared the most on social media this week is by a team at University College Dublin: "The Forgotten Margins of AI Ethics" by Abeba Birhane et al (May 2022) with 204 shares. @WellsLucasSanto (wells (oakland enby)) tweeted "Lots of really great insight from this paper about the state of ethics + justice in published papers at FAccT and AIES in the last few years. I particularly appreciate the section about ProPublica's COMPAS article, which we come back to time and time again (often uncritically)".

This week was very active for "Computer Science - Human-Computer Interaction", with 41 new papers.

This week was very active for "Computer Science - Learning", with 371 new papers.

  • The paper discussed most in the news over the past week was "OPT: Open Pre-trained Transformer Language Models" by Susan Zhang et al (May 2022), which was referenced 21 times, including in the article Facebook's new language model has 'high propensity to generate toxic language and reinforce harmful stereotypes' in Computing.co.uk. The paper author, Sameer Singh, was quoted saying "Disallowing commercial access completely or putting it behind a paywall may be the only way to justify, from a business perspective, why these companies should build and release LLMs in the first place". The paper got social media traction with 927 shares. On Twitter, @jonrjeffrey observed "Was just thinking it would be nice if some of OpenAI's models were open. Looks like Meta AI is beating OpenAI to the punch here for opening up a 175B parameter language model for researchers", while @loretoparisi observed "OPT-175B is comparable to #GPT3 while requiring only 1/7th the carbon footprint to develop. And... it’s #opensource 💥".

  • Leading researcher Oriol Vinyals (DeepMind) published "A Generalist Agent", which had 31 shares over the past 2 days. @HochreiterSepp tweeted "ArXiv Gato: a single generalist policy. Can play Atari, caption images, chat, stack blocks. Output determined by context (text, joint torques, buttons). 1.2B para. decoder-only transformer with 24 layers. Impressive results on control, robotics, language".

  • The paper shared the most on social media this week is by a team at Google: "Building Machine Translation Systems for the Next Thousand Languages" by Ankur Bapna et al (May 2022) with 117 shares. The authors share findings from their effort to build practical machine translation (MT) systems capable of translating across over one thousand languages. @popular_ML (Popular ML resources) tweeted "The most popular Arxiv link yesterday".

Over the past week, 15 new papers were published in "Computer Science - Multiagent Systems".

Over the past week, 13 new papers were published in "Computer Science - Neural and Evolutionary Computing".

This week was very active for "Computer Science - Robotics", with 75 new papers.

  • The paper discussed most in the news over the past week was by a team at DeepMind: "A Generalist Agent" by Scott Reed et al (May 2022), which was referenced 3 times, including in the article DeepMind's 'Gato' is mediocre, so why did they build it? in ZDNet. The paper author, Scott Reed (DeepMind), was quoted saying "With a single set of weights, Gato can engage in dialogue, caption images, stack blocks with a real robot arm, outperform humans at playing Atari games, navigate in simulated 3D environments, follow instructions, and more". The paper got social media traction with 100 shares. A Twitter user, @HochreiterSepp, said "ArXiv Gato: a single generalist policy. Can play Atari, caption images, chat, stack blocks. Output determined by context (text, joint torques, buttons). 1.2B para. decoder-only transformer with 24 layers. Impressive results on control, robotics, language".


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.