Week Ending 4.18.2021
RESEARCH WATCH: 4.18.2021
This week was active for "Computer Science", with 1,391 new papers.
The paper discussed most in the news over the past week was "Variational inference with a quantum computer" by Marcello Benedetti et al (Mar 2021), which was referenced 63 times, including in the article Cambridge Quantum Computing Pioneers Quantum Machine Learning Methods for Reasoning in InsideBIGDATA. The paper author, Matthias Rosenkranz, was quoted saying "cannot offer simple explanations for their answers and struggle when asked how confident they are on certain possible outcomes". The paper got social media traction with 18 shares. A Twitter user, @rosenkranz, commented "Our new paper "Variational inference with a #quantum computer" has been out for a few days 🎉. We develop the methods, then demonstrate them using a few graphical models (e.g. hidden Markov). #QuantumComputing #MachineLearning".
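For readers who want a concrete picture of the setup: the paper treats inference in a discrete graphical model as an optimization over a variational distribution, with a Born machine (a parameterized quantum circuit) playing the role of that distribution. The sketch below is a purely classical stand-in, assuming a made-up four-state model and a softmax categorical in place of the quantum sampler, and it climbs the standard evidence lower bound rather than the sample-based objectives a circuit would require.

    # Toy classical sketch of variational inference (not the paper's quantum method).
    import numpy as np

    K = 4
    log_prior = np.log(np.full(K, 1.0 / K))       # p(z): uniform prior over 4 latent states
    lik = np.array([0.1, 0.2, 0.3, 0.4])          # p(x_obs | z), made-up numbers
    log_joint = log_prior + np.log(lik)           # log p(x_obs, z)

    theta = np.zeros(K)                           # variational parameters
    for _ in range(500):
        q = np.exp(theta - theta.max())
        q /= q.sum()                              # q_theta(z) = softmax(theta)
        score = log_joint - np.log(q)             # per-state ELBO contribution
        grad = q * (score - q @ score)            # d ELBO / d theta (softmax chain rule)
        theta += 0.5 * grad                       # gradient ascent on the ELBO

    posterior = lik / lik.sum()                   # exact p(z | x_obs) under the uniform prior
    print("q(z)     :", np.round(q, 3))
    print("p(z | x) :", np.round(posterior, 3))   # q should closely match this

The paper's contribution is to make a distribution of this kind come from measurements of a quantum circuit and to train it with objectives that only need samples; the snippet above only shows what "variational inference" means in that sentence.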
Leading researcher Yoshua Bengio (Université de Montréal) published "Comparative Study of Learning Outcomes for Online Learning Platforms".
The paper shared the most on social media this week is by a team at Cornell: "GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds" by Zekun Hao et al (Apr 2021) with 414 shares.
This week was very active for "Computer Science - Artificial Intelligence", with 232 new papers.
The paper discussed most in the news over the past week was by a team at University of Oxford: "QNLP in Practice: Running Compositional Models of Meaning on a Quantum Computer" by Robin Lorenz et al (Feb 2021), which was referenced 62 times, including in the article NVIDIA Announces Technology For Training Giant Artificial Intelligence Models in Forbes.com. The paper got social media traction with 40 shares. The authors present results from the first NLP experiments conducted on Noisy Intermediate-Scale Quantum (NISQ) computers with datasets of at least 100 sentences. A Twitter user, @Moor_Quantum, observed "Very nice work by team that advances QNLP and provides significant POC that it can be done on quantum. Covers tech problems and training issues of running a NPL model with >100 sentences on NISQ QC".
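As a rough illustration of the sentence-to-circuit-to-measurement idea (not the paper's grammar-derived, multi-qubit circuits, and with an invented toy dataset): in the sketch below each word owns a single rotation angle on one simulated qubit, the sentence's circuit is the product of those rotations, and the class probability is the Born probability of measuring |1>.

    # Heavily simplified, hypothetical one-qubit "sentence classifier" run on a
    # classical simulator; the paper compiles structured circuits per sentence
    # and runs them on IBM quantum hardware.
    import numpy as np

    data = [("woman prepares tasty dinner", 1), ("chef cooks delicious meal", 1),
            ("programmer writes useful software", 0), ("engineer debugs the code", 0)]
    vocab = sorted({w for s, _ in data for w in s.split()})
    theta = {w: 0.01 for w in vocab}                     # one trainable angle per word

    def predict(sentence):
        angle = sum(theta[w] for w in sentence.split())  # Ry rotations about one axis add up
        return np.sin(angle / 2.0) ** 2                  # P(measure |1>) after Ry(angle)|0>

    for _ in range(500):                                 # squared-loss gradient descent
        for sentence, label in data:
            angle = sum(theta[w] for w in sentence.split())
            p = np.sin(angle / 2.0) ** 2
            d_loss_d_angle = 2.0 * (p - label) * 0.5 * np.sin(angle)
            for w in sentence.split():
                theta[w] -= 0.1 * d_loss_d_angle

    for sentence, label in data:
        print(f"{predict(sentence):.2f}  (label {label})  {sentence}")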
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Comparative Study of Learning Outcomes for Online Learning Platforms".
The paper shared the most on social media this week is "Deep Learning-based Online Alternative Product Recommendations at Scale" by Mingming Guo et al (Apr 2021) with 120 shares. The authors use both textual product information and customer behavior data to recommend alternative products. @manasstaneja (Manas Taneja 👨🏽💻) tweeted "Home Depot has publishing researchers???? Is this a post grad school plan I see forming 👀".
This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 309 new papers.
The paper discussed most in the news over the past week was by a team at Google: "How to represent part-whole hierarchies in a neural network" by Geoffrey Hinton (Feb 2021), which was referenced 8 times, including in the article Geoffrey Hinton has a hunch about what’s next for AI in Technology Review; the coverage drew reactions such as "A True researcher – Always loved Geoff for this." The paper also got the most social media traction with 1292 shares. The paper does not describe a working system. A user, @CSProfKGD, tweeted "Back to where it all started Geoff Hinton’s first paper", while @bruceyo84343094 commented "Part-whole relationship is all you need! It reminds me of TransPose using Transformer to explain the relationships between body parts".
Leading researcher Pieter Abbeel (University of California, Berkeley) came out with "Auto-Tuned Sim-to-Real Transfer", which had 24 shares over the past 2 days. @Jack_T_Collins tweeted "Plenty of work coming out recently using an online approaches for sim2real where the method requires rollouts onto the physical robot and then improves the simulator. Very cool work".
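For intuition, the loop the tweet alludes to can be pictured as ordinary system identification: run a command sequence on the real robot, replay it in simulation, and adjust the simulator's physics parameters until the trajectories agree. The toy sketch below does this for a single made-up damping parameter via bisection; the paper itself trains a search model that predicts how each simulation parameter should move, so treat this only as a caricature of the idea.

    # Toy sim-parameter tuning against "real" data (not the authors' algorithm).
    import numpy as np

    def rollout(friction, actions, dt=0.05):
        """Simulate a 1-D point mass whose velocity is damped by `friction`."""
        pos, vel, traj = 0.0, 0.0, []
        for a in actions:
            vel += dt * (a - friction * vel)
            pos += dt * vel
            traj.append(pos)
        return np.array(traj)

    true_friction = 0.8                    # unknown property of the "real" robot
    actions = np.ones(120)                 # command sequence executed on hardware
    real_traj = rollout(true_friction, actions)   # stands in for logged real data

    lo, hi = 0.05, 2.0                     # search range for the simulator parameter
    for _ in range(30):
        guess = 0.5 * (lo + hi)
        sim_traj = rollout(guess, actions)
        if sim_traj[-1] > real_traj[-1]:   # sim travels farther -> too little damping
            lo = guess
        else:
            hi = guess

    print(f"tuned friction ~ {0.5 * (lo + hi):.3f}   (true value: {true_friction})")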
The paper shared the most on social media this week is by a team at Cornell: "GANcraft: Unsupervised 3D Neural Rendering of Minecraft Worlds" by Zekun Hao et al (Apr 2021) with 414 shares.
Over the past week, 28 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was "Preliminary Analysis of Potential Harms in the Luca Tracing System" by Theresa Stadler et al (Mar 2021), which was referenced 1 time, including in the article Luca App: CCC calls for an immediate moratorium in Ccc.de. The paper also got the most social media traction with 1160 shares. The investigators analyse the potential harms a large-scale deployment of the Luca system might cause to individuals, venues, and communities. On Twitter, @saschakiefer posted "Interesting and concerning read. might be worth considering when you dig into comparing check-in apps as you mentioned in your last episode", while @stephanschmidt said "And fittingly: 'Its security concept solely relies on procedural controls and requires full trust in the Luca service operator to follow the protocols faithfully.'".
Leading researcher Yoshua Bengio (Université de Montréal) published "Comparative Study of Learning Outcomes for Online Learning Platforms".
The paper shared the most on social media this week is by a team at Zhejiang University: "Cross-Partisan Discussions on YouTube: Conservatives Talk to Liberals but Liberals Don't Talk to Conservatives" by Siqi Wu et al (Apr 2021) with 188 shares. @ideafaktory (Steve Faktor) tweeted "Guessing this is because media is prominently left-leaning, so conservatives are far more used to confronting liberal views than liberals are conservative ones. Hence the frequent 'triggering' of the latter".
This week was very active for "Computer Science - Human-Computer Interaction", with 43 new papers.
The paper discussed most in the news over the past week was by a team at IBM: "Perfection Not Required? Human-AI Partnerships in Code Translation" by Justin D. Weisz et al (Apr 2021), which was referenced 1 time, including in the article Pushing the boundaries of human-AI interaction at IUI 2021 in IBM Research. The paper got social media traction with 12 shares. A user, @kr_t, tweeted "The first paper - on human-AI partnerships in code translation - studies the deployment of Generative AI models specifically TransCoder from and examines how tolerant software engineers are likely to be of imperfections in such translations".
Leading researcher Yoshua Bengio (Université de Montréal) came out with "Comparative Study of Learning Outcomes for Online Learning Platforms".
The paper shared the most on social media this week is by a team at UC Berkeley: "Auto-Tuned Sim-to-Real Transfer" by Yuqing Du et al (Apr 2021) with 52 shares. @Jack_T_Collins (Jack Collins) tweeted "Plenty of work coming out recently using an online approaches for sim2real where the method requires rollouts onto the physical robot and then improves the simulator. Very cool work".
This week was very active for "Computer Science - Learning", with 420 new papers.
The paper discussed most in the news over the past week was "Variational inference with a quantum computer" by Marcello Benedetti et al (Mar 2021), which was referenced 63 times.
Leading researcher Oriol Vinyals (DeepMind) came out with "Machine Translation Decoding beyond Beam Search". @timohear tweeted "Thank you, this is a treat :-) WRT to "Deep Learning doesn't do discrete": don't language models have discrete tokens on input and output? And discrete search (Beam search but also cf How is that fundamentally different from what François is proposing?".
The paper shared the most on social media this week is by a team at McGill University: "Masked Language Modeling and the Distributional Hypothesis: Order Word Matters Pre-training for Little" by Koustuv Sinha et al (Apr 2021) with 266 shares. The researchers propose a different explanation: MLMs succeed on downstream tasks almost entirely due to their ability to model higher-order word co-occurrence statistics. @kchonyc (Kyunghyun Cho) tweeted "this time not mine but fun/notorious/legendary remark that brought me to language many years ago: "Syntax is not a thing"".
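The claim rests on a simple manipulation: scramble the word order of the pretraining sentences and see how little downstream performance suffers. The toy snippet below (invented corpus, whitespace tokenization, not the authors' pipeline) shows why that manipulation is informative: shuffling destroys syntax but leaves within-sentence word co-occurrence counts exactly unchanged.

    # Toy illustration: word-order shuffling preserves co-occurrence statistics.
    import random
    from collections import Counter
    from itertools import combinations

    corpus = [
        "the robot picks up the red block",
        "the red block falls on the floor",
        "a robot arm stacks the blocks",
    ]

    def cooccurrence(sentences):
        """Unordered within-sentence word-pair counts."""
        counts = Counter()
        for s in sentences:
            counts.update(frozenset(p) for p in combinations(s.split(), 2))
        return counts

    random.seed(0)
    shuffled = []
    for s in corpus:
        toks = s.split()
        random.shuffle(toks)               # the order-destroying step
        shuffled.append(" ".join(toks))

    print(shuffled[0])                                     # syntax is gone ...
    print(cooccurrence(corpus) == cooccurrence(shuffled))  # ... but this prints True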
Over the past week, 18 new papers were published in "Computer Science - Multiagent Systems".
Over the past week, 31 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was by a team at Drexel University: "Endurance-Aware Mapping of Spiking Neural Networks to Neuromorphic Hardware" by Twisha Titirsha et al (Mar 2021), which was referenced 2 times, including in the article eSpine: A technique to increase the usable lifetime of neuromorphic systems in Tech Xplore. The paper author, Anup Das (Drexel University), was quoted saying "Through circuit simulations at sub-micron technology nodes, we show that the memory cells in a neuromorphic system can have significant differences in write endurance". The paper was shared once on social media.
This week was very active for "Computer Science - Robotics", with 67 new papers.
The paper discussed most in the news over the past week was by a team at University of California, Berkeley: "Reinforcement Learning for Robust Parameterized Locomotion Control of Bipedal Robots" by Zhongyu Li et al (Mar 2021), which was referenced 8 times, including in the article A Robot Taught Itself to Walk, Just Like a Baby in Interesting Engineering. The paper author, Zhongyu Li (Xi’an Jiaotong University), was quoted saying "These videos may lead some people to believe that this is a solved and easy problem". Chelsea Finn (Stanford University), who is not part of the study, said "Many of the videos that you see of virtual agents are not at all realistic". The paper got social media traction with 52 shares. A user, @svlevine, tweeted "A few folks pointed out to me that there are some factual errors in the MIT TR article. Certainly we are *not* claiming that our paper is the first to show RL for bipedal locomotion! Our prior work section covers lots of prior papers on this".
Leading researcher Pieter Abbeel (University of California, Berkeley) published "Auto-Tuned Sim-to-Real Transfer", which had 24 shares over the past 2 days. @Jack_T_Collins tweeted "Plenty of work coming out recently using an online approaches for sim2real where the method requires rollouts onto the physical robot and then improves the simulator. Very cool work".
The paper shared the most on social media this week is by a team at Carnegie Mellon University: "BARF: Bundle-Adjusting Neural Radiance Fields" by Chen-Hsuan Lin et al (Apr 2021) with 212 shares. The authors propose Bundle-Adjusting Neural Radiance Fields (BARF) for training NeRF from imperfect (or even unknown) camera poses -- the joint problem of learning neural 3D representations and registering camera frames. @zsr5 (Zach 🇺🇸) tweeted "Amazing work! Keep the acronym, it’s catchy hahaha".
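To see why "imperfect poses" makes this a joint problem, the toy sketch below optimizes a scene representation and a camera-pose parameter together by back-propagating the photometric error into both. It is a 1-D analogue with a single unknown shift and a small MLP, assuming nothing from the paper beyond the joint-optimization idea (in particular, none of BARF's coarse-to-fine positional-encoding schedule).

    # Toy joint optimization of a "scene" network and a "pose" parameter (not BARF itself).
    import torch

    torch.manual_seed(0)
    true_shift = 0.3                                   # unknown pose error of camera 2
    xs = torch.linspace(-1.0, 1.0, 256).unsqueeze(1)   # "pixel" coordinates
    scene = lambda x: torch.sin(3.0 * x) + 0.5 * torch.cos(7.0 * x)
    view_ref = scene(xs)                               # camera 1: pose known
    view_bad = scene(xs + true_shift)                  # camera 2: pose mis-calibrated

    mlp = torch.nn.Sequential(                         # stands in for the radiance field
        torch.nn.Linear(1, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 1),
    )
    shift = torch.nn.Parameter(torch.zeros(1))         # pose estimate, initialized wrong
    opt = torch.optim.Adam(list(mlp.parameters()) + [shift], lr=1e-2)

    for step in range(2000):
        opt.zero_grad()
        loss = ((mlp(xs) - view_ref) ** 2).mean() \
             + ((mlp(xs + shift) - view_bad) ** 2).mean()   # photometric error, both views
        loss.backward()                                # gradients reach pose AND scene
        opt.step()

    print(f"recovered shift: {shift.item():.3f}   (true: {true_shift})")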