Eye On AI

Week Ending 5.8.2022

RESEARCH WATCH: 5.8.2022

SPONSORED BY

ClearML is an open-source MLOps solution. Whether you're a data engineer, ML engineer, DevOps engineer, or data scientist, ClearML is hands-down the best collaborative MLOps tool, with full visibility and extensibility.

This week was active for "Computer Science", with 1,232 new papers.

This week was very active for "Computer Science - Artificial Intelligence", with 221 new papers.

  • The paper discussed most in the news over the past week was by a team at Arizona State University: "Don't Blame the Annotator: Bias Already Starts in the Annotation Instructions" by Mihir Parmar et al (May 2022), which was referenced 13 times, including in the article Perceptron: AI bias can arise from annotation instructions in TechCrunch. The paper got social media traction with 9 shares. The authors hypothesize that annotators pick up on patterns in the crowdsourcing instructions, which bias them toward writing similar examples that are then over-represented in the collected data (a toy illustration follows this section's bullets). A user, @summarizedml, tweeted "We study the influence of instruction bias in NLU benchmarks, showing that crowdsourcing instructions often exhibit concrete patterns, which are propagated by crowdworkers 📄".

  • Leading researcher Sergey Levine (University of California, Berkeley) came out with "ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters", which had 21 shares over the past 3 days. The researchers present a large-scale data-driven framework for learning versatile and reusable skill embeddings for physically simulated characters (a second sketch after the bullets illustrates the idea). @summarizedml tweeted "A data-driven framework for learning versatile and reusable skill embeddings for physically simulated characters. 📄".

  • The paper shared the most on social media this week is by a team at Microsoft: "Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems" by Paul Smolensky et al (May 2022) with 204 shares. @LentoBio (Juan Valle Lisboa) tweeted "Interesting. These efforts should merge with those that use other tools say those from and the experimental approaches like the one used by to see how the brain does it. We will have a better Cognitive Science and AI will be applied CogSci".

  • The most influential Twitter user discussing papers is AK, who shared "PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining" by Yuting Gao et al (Apr 2022) and said: "PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining abs: PyramidCLIP only trained for 8 epochs using 128M image-text pairs are very close to that of CLIP trained for 32 epochs using 400M training data".
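
To make the instruction-bias hypothesis above concrete, here is a toy illustration, not the paper's method: it counts how often word n-grams seeded by the instructions' example prompts recur in the collected data, a rough proxy for over-represented patterns. The function name and the n-gram heuristic are our own assumptions.

    from collections import Counter

    def pattern_overrepresentation(instruction_examples, collected_examples, n=4):
        """Toy proxy for instruction bias (not the paper's method): count how
        often word n-grams from the instructions' example prompts recur in
        the crowdsourced data."""
        def ngrams(text):
            tokens = text.lower().split()
            return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

        # N-grams appearing in the instructions' worked examples.
        seed = set().union(*(ngrams(ex) for ex in instruction_examples))
        # How often those same patterns show up in the collected dataset.
        counts = Counter(g for ex in collected_examples for g in ngrams(ex) & seed)
        return counts.most_common(10)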
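
And for the ASE paper, a minimal sketch of the interface it revolves around: a low-level controller conditioned on a latent skill embedding z, so the same network can be reused across tasks by varying z. The class name, layer sizes, and activations are illustrative assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class SkillConditionedPolicy(nn.Module):
        """Illustrative only: maps (state, latent skill z) to an action,
        in the spirit of reusable skill embeddings."""
        def __init__(self, state_dim, latent_dim, action_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim + latent_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, action_dim), nn.Tanh())

        def forward(self, state, z):
            # A high-level policy (or a user) picks z; varying z selects
            # different learned skills from the shared embedding space.
            return self.net(torch.cat([state, z], dim=-1))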

This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 259 new papers.

  • The paper discussed most in the news over the past week was by a team at Idiap Research Institute: "Are GAN-based Morphs Threatening Face Recognition?" by Eklavya Sarkar et al (May 2022), which was referenced 13 times, including in the article Perceptron: AI bias can arise from annotation instructions in TechCrunch. The paper got social media traction with 6 shares. A user, @summarizedml, tweeted "A dataset and code for four types of morphing attacks, including those that use StyleGAN 2 to generate synthetic morphs, which are 📄".

  • Leading researcher Dhruv Batra (Georgia Institute of Technology) came out with "Episodic Memory Question Answering". @summarizedml tweeted "We introduce a new task for answering questions in an egocentric augmented reality device that uses episodic memory to encode spatio-temporal information 📄".

  • The paper shared the most on social media this week is by a team at Google: "CoCa: Contrastive Captioners are Image-Text Foundation Models" by Jiahui Yu et al (May 2022) with 241 shares. The authors present Contrastive Captioner (CoCa), a minimalist design to pretrain an image-text encoder-decoder foundation model jointly with contrastive loss and captioning loss, thereby subsuming model capabilities from contrastive approaches like CLIP and generative methods like SimVLM (a minimal sketch of the joint objective follows below). @HochreiterSepp (Sepp Hochreiter) tweeted "ArXiv Minimalist design to pretrain image-text encoder-decoder foundation model with contrastive and captioning loss. Combines contrastive (CLIP) with generative (SimVLM) models. ImageNet top-1: 86.3% zero-shot, 90.6% with frozen encoder, and SOTA 91.0%".
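
As a rough rendering of CoCa's joint objective, this sketch sums a CLIP-style contrastive term with an autoregressive captioning term; the temperature, loss weighting, and function name are our assumptions, not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def coca_style_loss(image_emb, text_emb, caption_logits, caption_targets,
                        temperature=0.07, caption_weight=1.0):
        # Contrastive term: align matched image/text pairs (CLIP-style).
        image_emb = F.normalize(image_emb, dim=-1)
        text_emb = F.normalize(text_emb, dim=-1)
        logits = image_emb @ text_emb.t() / temperature
        labels = torch.arange(logits.size(0), device=logits.device)
        contrastive = (F.cross_entropy(logits, labels)
                       + F.cross_entropy(logits.t(), labels)) / 2
        # Captioning term: cross-entropy over predicted caption tokens.
        captioning = F.cross_entropy(
            caption_logits.reshape(-1, caption_logits.size(-1)),
            caption_targets.reshape(-1))
        return contrastive + caption_weight * captioning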

This week was active for "Computer Science - Computers and Society", with 42 new papers.

This week was very active for "Computer Science - Human-Computer Interaction", with 46 new papers.

This week was very active for "Computer Science - Learning", with 393 new papers.

  • The paper discussed most in the news over the past week was "OPT: Open Pre-trained Transformer Language Models" by Susan Zhang et al (May 2022), which was referenced 14 times, including in the article Meta’s Challenge to OpenAI—Give Away a Massive Language Model in Spectrum Online. Sameer Singh was quoted in the article saying "Disallowing commercial access completely or putting it behind a paywall may be the only way to justify, from a business perspective, why these companies should build and release LLMs in the first place". The paper got social media traction with 902 shares. A Twitter user, @jonrjeffrey, commented "Was just thinking it would be nice if some of OpenAI's models were open. Looks like Meta AI is beating OpenAI to the punch here for opening up a 175B parameter language model for researchers", while @loretoparisi commented "OPT-175B is comparable to #GPT3 while requiring only 1/7th the carbon footprint to develop. And... it’s #opensource 💥".

  • Leading researcher Yoshua Bengio (Université de Montréal) published "A Highly Adaptive Acoustic Model for Accurate Multi-Dialect Speech Recognition".

This week was active for "Computer Science - Multiagent Systems", with 21 new papers.

Over the past week, 27 new papers were published in "Computer Science - Neural and Evolutionary Computing".

  • Leading researcher Jianfeng Gao (Microsoft) published "Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems", which was also the paper shared most on social media this week, with 204 tweets. @LentoBio (Juan Valle Lisboa) tweeted "Interesting. These efforts should merge with those that use other tools say those from and the experimental approaches like the one used by to see how the brain does it. We will have a better Cognitive Science and AI will be applied CogSci".

This week was very active for "Computer Science - Robotics", with 75 new papers.

  • The paper discussed most in the news over the past week was by a team at Google: "Google Scanned Objects: A High-Quality Dataset of 3D Scanned Household Items" by Laura Downs et al (Apr 2022), which was referenced 4 times, including in the article Radar trends to watch: May 2022 in O'Reilly Network. The paper got social media traction with 101 shares. A Twitter user, @MisterTechBlog, posted "Google Scanned Objects: Open Source collection of over one thousand 3D-Scanned Household Items", while @JeffDean posted ""Google Scanned Objects (GSO) dataset, a curated collection of over 1000 3D scanned common household items for use in the Ignition Gazebo simulators" 17 action figures, 28 bags, 254 shoes, and more!".


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.