Week Ending 2.20.2022
RESEARCH WATCH: 2.20.2022
This week was active for "Computer Science", with 1,278 new papers.
The paper discussed most in the news over the past week was by a team at Cambridge Research Laboratory: "Paving the Way towards 800 Gbps Quantum-Secured Optical Channel Deployment in Mission-Critical Environments" by Farzam Toudeh-Fallah et al (Feb 2022), which was referenced 83 times, including in the article "JPMorgan, Toshiba, Ciena find new way to protect blockchain with quantum network" in Reuters. The paper author, Marco Pistoia (IBM), was quoted saying "Security is paramount for JPMorgan Chase". The paper got social media traction with 6 shares.
Leading researcher Oriol Vinyals (DeepMind) published "General-purpose, long-context autoregressive modeling with Perceiver AR". @FinSentim tweeted "This work develops Perceiver AR, an autoregressive, modality-agnostic architecture which uses cross-attention to map long-range inputs to a small number of latents while also maintaining end-to-end causal masking. Perceiver AR can attend to over a hundred thousand tokens. #ML".
The paper shared the most on social media this week is by a team at University of Oxford: "Gradients without Backpropagation" by Atılım Güneş Baydin et al (Feb 2022) with 465 shares. @_arohan_ (Rohan Anil) tweeted "In distributed training one can sum a bunch of forward gradients, does that become more accurate, while exploiting parallelism 🤔".
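The forward-gradient idea behind this paper (and @_arohan_'s question about summing many of them) can be sketched with dual numbers: sample a random direction v, compute the directional derivative ∇f·v in a single forward pass, and use (∇f·v)v as an unbiased gradient estimate. This is a minimal illustration, not the paper's code; the `Dual` class and the objective `f` below are made up for the example.

```python
import random

class Dual:
    """Dual number a + b*eps for forward-mode automatic differentiation."""
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule carried in the eps (tangent) component
        return Dual(self.val * o.val, self.val * o.eps + self.eps * o.val)
    __rmul__ = __mul__

def f(x, y):                      # toy objective: f(x, y) = x^2 + 3xy
    return x * x + 3 * x * y

def forward_gradient(theta):
    v = [random.gauss(0, 1) for _ in theta]           # random tangent direction
    duals = [Dual(t, vi) for t, vi in zip(theta, v)]
    jvp = f(*duals).eps                               # directional derivative ∇f·v, one forward pass
    return [jvp * vi for vi in v]                     # unbiased: E[(∇f·v)v] = ∇f(theta)

theta = [1.0, 2.0]                # true gradient here is [2x+3y, 3x] = [8.0, 3.0]
random.seed(0)
N = 20000
est = [0.0, 0.0]
for _ in range(N):
    g = forward_gradient(theta)
    est = [e + gi / N for e, gi in zip(est, g)]
print(est)                        # ≈ [8.0, 3.0]
```

Averaging many independent forward gradients, as the tweet suggests doing across workers, drives the estimate toward the true gradient without ever running a backward pass.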
This week was very active for "Computer Science - Artificial Intelligence", with 218 new papers.
The paper discussed most in the news over the past week was "Compute Trends Across Three Eras of Machine Learning" by Jaime Sevilla et al (Feb 2022), which was referenced 10 times, including in the article "Artificial intelligence may already be ‘slightly conscious’, AI scientists warn" in The Independent. The paper author, Tamay Besiroglu, was quoted saying "Seeing so many prominent machine learning folks ridiculing this idea is disappointing". The paper got social media traction with 354 shares. The investigators study trends in the most readily quantified factor: compute. A user, @TShevlane, tweeted "Remember the year 2010? We now have AI systems that take roughly 10 billion times more compute to train than back then. Seems like an important shift!", while @ohlennart commented "Compared to AI and Compute we find a slower, but still tremendous, doubling rate of 6 months instead of their 3.4 months. We analyze this difference in Appendix E 5/".
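The doubling times quoted in these tweets can be sanity-checked with one line of compound-growth arithmetic (the 12-year span and the rates come from the tweets above; nothing here is taken from the paper's dataset):

```python
# A doubling time of d months over a span of m months multiplies
# training compute by 2**(m/d).
def growth_factor(months, doubling_months):
    return 2 ** (months / doubling_months)

# 2010 -> 2022 (144 months) at the older "AI and Compute" rate of ~3.4 months:
print(f"{growth_factor(144, 3.4):.2e}")   # on the order of 10^12
# the same span at the paper's slower ~6-month doubling:
print(f"{growth_factor(144, 6):.2e}")     # 2^24, on the order of 10^7
```

The "roughly 10 billion times" figure in @TShevlane's tweet falls between these two extremes, consistent with growth that was faster in some eras than others.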
Leading researcher Oriol Vinyals (DeepMind) published "General-purpose, long-context autoregressive modeling with Perceiver AR".
The paper shared the most on social media this week is by a team at Google: "Transformer Memory as a Differentiable Search Index" by Yi Tay et al (Feb 2022) with 370 shares. The investigators demonstrate that information retrieval can be accomplished with a single Transformer, in which all information about the corpus is encoded in the parameters of the model. @henlojseam (jseam.eth) tweeted "Using a transformer as a caching strategy 🤔🤔🤔🤔🤔🤔".
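The interface the paper proposes, retrieval as generating a document identifier directly from model parameters rather than consulting an inverted index, can be loosely sketched as follows. This toy memorizes word-docid associations in a weight table instead of training a real seq2seq Transformer, and the mini-corpus is invented; only the shape of the idea carries over.

```python
import collections

# Hypothetical mini-corpus (the paper evaluates on real QA datasets).
docs = {
    "doc1": "transformers use attention to process sequences",
    "doc2": "convolutional networks excel at image recognition",
    "doc3": "reinforcement learning agents maximize reward",
}
docids = list(docs)

# "Indexing": fold every (word, docid) association into the parameters.
W = collections.defaultdict(float)
for docid, text in docs.items():
    for word in text.split():
        W[(word, docid)] += 1.0
        for other in docids:          # push non-matching docids down
            if other != docid:
                W[(word, other)] -= 0.5

def retrieve(query):
    # "Retrieval": emit the docid the parameters score highest;
    # no corpus or index is consulted at query time.
    return max(docids, key=lambda d: sum(W[(w, d)] for w in query.split()))

print(retrieve("attention for sequences"))  # -> doc1
print(retrieve("image recognition"))        # -> doc2
```

The point is that after "indexing", the corpus itself can be thrown away: everything needed to answer a query lives in `W`, just as the paper's corpus knowledge lives in Transformer weights.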
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 257 new papers.
The paper discussed most in the news over the past week was by a team at University of Michigan: "Self-supervised Learning from 100 Million Medical Images" by Florin C. Ghesu et al (Jan 2022), which was referenced 2 times, including in the article "Quobyte Storage For Biomedical Applications And Cloudian With The Weka AI Data Platform" in Forbes.com. The paper got social media traction with 22 shares. On Twitter, @Hornegger observed "Congratulations to #FAUalum Florin Ghesu and team to this outstanding paper. With 100,000,000 images you open up a new dimension in medical imaging, while others still struggle with small data sets".
Leading researcher Oriol Vinyals (DeepMind) published "General-purpose, long-context autoregressive modeling with Perceiver AR".
The paper shared the most on social media this week is by a team at Yonsei University: "How Do Vision Transformers Work?" by Namuk Park et al (Feb 2022) with 227 shares. @HochreiterSepp (Sepp Hochreiter) tweeted "ArXiv Multi-head self-attention (MSA) modules 1) flatten the loss landscapes 2) are low-pass filters (Convs -> high-pass), 3) are individual models in multi-stages. New architecture: replacing Conv blocks by MSA modules at end of stages outperforms CNNs".
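The "low-pass filter" claim in @HochreiterSepp's summary can be illustrated on a toy signal: attention with near-uniform weights behaves like a moving average, which damps high-frequency components far more than low-frequency ones. This sketch is not the paper's experiment; the window size and frequencies are arbitrary choices.

```python
import math

def dft_mag(x, k):
    """Magnitude of the k-th DFT coefficient of a real signal x."""
    n = len(x)
    re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
    im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
    return math.hypot(re, im)

n = 64
# signal = a low-frequency wave (k=2) plus a high-frequency wave (k=20)
x = [math.sin(2 * math.pi * 2 * t / n) + math.sin(2 * math.pi * 20 * t / n)
     for t in range(n)]

# "uniform attention": each position averages a local window of 5 tokens
w = 5
smoothed = [sum(x[(t + d) % n] for d in range(-(w // 2), w // 2 + 1)) / w
            for t in range(n)]

print(dft_mag(smoothed, 2) / dft_mag(x, 2))    # low freq barely attenuated (~0.96)
print(dft_mag(smoothed, 20) / dft_mag(x, 20))  # high freq strongly damped (~0.24)
```

A convolution with a sharpening kernel would show the opposite behavior, which is the Convs-as-high-pass half of the tweet's contrast.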
This week was active for "Computer Science - Computers and Society", with 36 new papers.
The paper discussed most in the news over the past week was "Compute Trends Across Three Eras of Machine Learning" by Jaime Sevilla et al (Feb 2022).
The paper shared the most on social media this week is "Measuring Trustworthiness or Automating Physiognomy? A Comment on Safra, Chevallier, Grèzes, and Baumard (2020)" by Rory W Spanton et al (Feb 2022) with 313 shares. @bradpwyble (Brad Wyble) tweeted "Great to see this out. There are quite a few things wrong with this study but mostly we should stop trying to predict how people think and feel based on the shape of their head. It is 2022 and we're better than this (I hope)".
This week was very active for "Computer Science - Human-Computer Interaction", with 39 new papers.
The paper discussed most in the news over the past week was "InfraredTags: Embedding Invisible AR Markers and Barcodes Using Low-Cost, Infrared-Based 3D Printing and Imaging Tools" by Mustafa Doga Dogan (MIT CSAIL, Cambridge, MA, USA) et al (Feb 2022), which was referenced 1 time, including in the article "3D Printing News Briefs, February 19, 2022: Metal 3D Printing, Research, & More" in 3DPrint.com. The paper got social media traction with 5 shares. A user, @arXiv_reaDer, tweeted "InfraredTags: Embedding Invisible AR Markers and Barcodes Using Low-Cost, Infrared-Based 3D Pri ... [title repeated in Japanese:] InfraredTags: embedding invisible AR markers and barcodes using low-cost, infrared-based 3D printing and imaging tools arXiv".
This week was extremely active for "Computer Science - Learning", with 505 new papers.
The paper discussed most in the news over the past week was "Compute Trends Across Three Eras of Machine Learning" by Jaime Sevilla et al (Feb 2022).
Leading researcher Oriol Vinyals (DeepMind) published "General-purpose, long-context autoregressive modeling with Perceiver AR".
The paper shared the most on social media this week is by a team at University of Oxford: "Gradients without Backpropagation" by Atılım Güneş Baydin et al (Feb 2022) with 465 shares.
Over the past week, ten new papers were published in "Computer Science - Multiagent Systems".
Over the past week, 21 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper shared the most on social media this week is by a team at University College London: "Testing the Tools of Systems Neuroscience on Artificial Neural Networks" by Grace W. Lindsay (Feb 2022) with 82 shares. The investigators argue that these tools should be explicitly tested and that artificial neural networks (ANNs) are an appropriate testing ground for them. @KordingLab tweeted "We need to test the methods we use in neuroscience. Microprocessors. Or, as [this paper] proposes, neural networks. But we need to test them!".
This week was active for "Computer Science - Robotics", with 62 new papers.
The paper discussed most in the news over the past week was "Mind the Gap! A Study on the Transferability of Virtual vs Physical-world Testing of Autonomous Driving Systems" by Andrea Stocco et al (Dec 2021), which was referenced 3 times, including in the article "Training autonomous vehicles requires more than simulation" in Phone Week. The paper was shared twice on social media.