Eye On AI

Week Ending 3.7.2021

RESEARCH WATCH: 3.7.2021

This week was active for "Computer Science", with 1,092 new papers.

  • The paper discussed most in the news over the past week was by a team at University of Oxford: "QNLP in Practice: Running Compositional Models of Meaning on a Quantum Computer" by Robin Lorenz et al (Feb 2021), which was referenced 61 times, including in the article Cambridge Quantum Announce Largest Ever Natural Language Processing Implementation on a Quantum Computer in Canada NewsWire. The paper got social media traction with 37 shares. The researchers present results on the first NLP experiments conducted on Noisy Intermediate-Scale Quantum (NISQ) computers for datasets of 100 or more sentences. A Twitter user, @Moor_Quantum, observed "Very nice work by team that advances QNLP and provides significant POC that it can be done on quantum. Covers tech problems and training issues of running a NPL model with >100 sentences on NISQ QC".

  • Leading researcher Yoshua Bengio (Université de Montréal) published "Neural Production Systems". @bengoertzel tweeted "nice progress from Yoshua Bengio etc. on extracting symbolic rules from data using neural nets ... limited in scope/power but promising".

  • The paper shared the most on social media this week is "Neural 3D Video Synthesis" by Tianye Li et al (Mar 2021) with 275 shares. @rmbrualla (Ricardo Martin-Brualla) tweeted "Really cool work from my neighbor Mira Slavcheva!! 😃 The cooking dataset makes me *so hungry*! Seems like it is a pandemic-inspired dataset, like the ones we got for nerfies. Congrats y'all! #futureofcooking".

  • The most influential Twitter user discussing papers is Sebastian Raschka who shared "Generative Adversarial Transformers" by Drew A. Hudson et al (Mar 2021) and said: "GANsformer: 2 weeks later, the second methods on GANs with Transformers just went live: Wow, their generated images look better than anything I was ever able to generate with existing GANs".

This week was very active for "Computer Science - Artificial Intelligence", with 150 new papers.

  • The paper discussed most in the news over the past week was by a team at University of Oxford: "QNLP in Practice: Running Compositional Models of Meaning on a Quantum Computer" by Robin Lorenz et al (Feb 2021).

  • Leading researcher Yoshua Bengio (Université de Montréal) came out with "Neural Production Systems". @bengoertzel tweeted "nice progress from Yoshua Bengio etc. on extracting symbolic rules from data using neural nets ... limited in scope/power but promising".

  • The paper shared the most on social media this week is by a team at Stanford University: "Generative Adversarial Transformers" by Drew A. Hudson et al (Mar 2021) with 226 shares. @artsiom_s (Artsiom Sanakoyeu) tweeted "Generative Adversarial Transformers: The GANsformer leverages a bipartite structure to allow long-range interactions, while evading the quadratic complexity standard transformers suffer from. Presented 2 novel attention types".
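To make the "bipartite structure" mentioned in the tweet concrete, here is a minimal NumPy sketch of the general idea, not the GANsformer implementation itself: a small set of k latent variables attends over the n image features, and the features then attend back over the latents, so each pass costs O(n*k) rather than the O(n^2) of full self-attention. The function and variable names (cross_attention, latents, features) are illustrative placeholders of mine.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product attention from queries onto keys/values."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (num_queries, num_keys)
    return softmax(scores) @ values          # (num_queries, d)

rng = np.random.default_rng(0)
n, k, d = 1024, 16, 64                       # many image features, few latents
features = rng.normal(size=(n, d))           # flattened image feature grid
latents = rng.normal(size=(k, d))            # global latent variables

# Bipartite update: latents gather information from the features,
# then the features are updated from the refreshed latents.
latents = cross_attention(latents, features, features)   # cost ~ O(n * k)
features = cross_attention(features, latents, latents)   # cost ~ O(n * k)
print(features.shape, latents.shape)         # (1024, 64) (16, 64)
```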

This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 257 new papers.

  • The paper discussed most in the news over the past week was by a team at DeepMind: "High-Performance Large-Scale Image Recognition Without Normalization" by Andrew Brock et al (Feb 2021), which was referenced 9 times, including in the article Best of arXiv - Readings for March 2021 in Towards Data Science. The paper got social media traction with 1085 shares. The investigators develop an adaptive gradient clipping technique that overcomes the instabilities of training without normalization layers, and design a significantly improved class of Normalizer-Free ResNets; a simplified sketch of the clipping rule appears after this list. A user, @sohamde_, tweeted "Releasing NFNets: SOTA on ImageNet. Without normalization layers! Code: This is the third paper in a series that began by studying the benefits of BatchNorm and ended by designing highly performant networks w/o it. A thread: 1/8".

  • The paper shared the most on social media this week is "Neural 3D Video Synthesis" by Tianye Li et al (Mar 2021).

  • The most influential Twitter user discussing papers is Sebastian Raschka who shared "Generative Adversarial Transformers" by Drew A. Hudson et al (Mar 2021).
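Regarding the NFNet item above: the core of the paper's adaptive gradient clipping (AGC) is to rescale a weight's gradient whenever the ratio of gradient norm to parameter norm exceeds a small threshold. The sketch below is a simplified per-tensor version in PyTorch; the paper actually applies the rule unit-wise (per output row of each weight matrix), and the function name and default values here are mine, not the authors' code.

```python
import torch

def adaptive_gradient_clip_(parameters, clipping=0.01, eps=1e-3):
    """Simplified per-tensor sketch of adaptive gradient clipping (AGC).

    Rescales a gradient in place when its norm exceeds `clipping` times the
    corresponding parameter's norm. Illustrative only.
    """
    for p in parameters:
        if p.grad is None:
            continue
        param_norm = p.detach().norm().clamp_(min=eps)   # guard against tiny weights
        grad_norm = p.grad.detach().norm()
        max_norm = clipping * param_norm
        if grad_norm > max_norm:
            p.grad.detach().mul_(max_norm / (grad_norm + 1e-6))

# Typical use in a training step (model and optimizer assumed to exist):
#   loss.backward()
#   adaptive_gradient_clip_(model.parameters())
#   optimizer.step()
```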

Over the past week, 28 new papers were published in "Computer Science - Computers and Society".

  • The paper discussed most in the news over the past week was "Auditing E-Commerce Platforms for Algorithmically Curated Vaccine Misinformation" by Prerna Juneja et al (Jan 2021), which was referenced 7 times, including in the article Amazon directs customers to vaccine misinformation, study finds in EuroNews. The paper author, Prerna Juneja, was quoted saying "We found out that once users start engaging with misinformation on the platform, they are presented with more misinformation at various points in their Amazon navigation route". The paper got social media traction with 27 shares. A Twitter user, @JMarkOckerbloom, observed "Link to preprint discussed in the story: Among the findings are that misinforming books often get ranked higher than books that debunk them, and that you get notably more misinformation recommendations once you click on one of them (even if you don't buy)".

This week was active for "Computer Science - Human-Computer Interaction", with 31 new papers.

This week was very active for "Computer Science - Learning", with 373 new papers.

Over the past week, 13 new papers were published in "Computer Science - Multiagent Systems".

Over the past week, 23 new papers were published in "Computer Science - Neural and Evolutionary Computing".

  • The paper discussed most in the news over the past week was by a team at Google: "Evolving Reinforcement Learning Algorithms" by John D. Co-Reyes et al (Jan 2021), which was referenced 3 times, including in the article Implementing DQNClipped and DQNReg with Stable Baselines in Medium.com. The paper got social media traction with 81 shares. A Twitter user, @AlifePapers, said "EVOLVING REINFORCEMENT LEARNING ALGORITHMS "We propose a method for meta-learning reinforcement learning algorithms by searching over the space of computational graphs which compute the loss function for a value-based model-free RL agent to optimize"".
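The search described in the tweet can be illustrated with a toy sketch. The code below is my own simplification, not the paper's actual search space or evolutionary algorithm: candidate value-based losses are represented as tiny "graphs" over inputs such as Q(s, a) and the bootstrap target, and a random search keeps the best-scoring candidate (the "reg_td" entry is only loosely inspired by the paper's DQNReg loss).

```python
import random
import numpy as np

# Toy op set standing in for the paper's space of loss computational graphs.
OPS = {
    "square_td": lambda q, y: (q - y) ** 2,         # standard DQN-style TD loss
    "abs_td": lambda q, y: np.abs(q - y),           # L1 variant
    "reg_td": lambda q, y: 0.1 * q + (q - y) ** 2,  # regularized variant (illustrative)
}

def propose_loss():
    """Sample one candidate loss from the (tiny) search space."""
    name = random.choice(list(OPS))
    return name, OPS[name]

def evaluate(loss_fn, q_values, targets):
    """Stand-in for training an agent with this loss and measuring its return."""
    return -float(np.mean(loss_fn(q_values, targets)))

# Random-search loop over candidates, keeping the best-scoring one.
q = np.array([1.0, 0.5, 2.0])
y = np.array([0.9, 0.7, 1.5])
candidates = [propose_loss() for _ in range(10)]
best_name, _ = max(candidates, key=lambda c: evaluate(c[1], q, y))
print("selected loss:", best_name)
```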

This week was very active for "Computer Science - Robotics", with 103 new papers.


EYE ON A.I. GETS READERS UP TO DATE ON THE LATEST FUNDING NEWS AND RELATED ISSUES. SUBSCRIBE FOR THE WEEKLY NEWSLETTER.