Week Ending 2.13.2022
RESEARCH WATCH: 2.13.2022
This week was active for "Computer Science", with 1,254 new papers.
The paper discussed most in the news over the past week was "DRAWNAPART: A Device Identification Technique based on Remote GPU Fingerprinting" by Tomer Laor (Ben-Gurion University of the Negev) et al (Jan 2022), which was referenced 38 times, including in the article Your Computer Hardware May Be Leaking Data About You in Lifewire - Tech Untangled. The paper got social media traction with 106 shares. The authors report on a new technique that can significantly extend the tracking time of fingerprint-based tracking methods. On Twitter, @FobbeSean posted "New and hard to evade: Tracking browsers via unique fingerprinting of GPUs with small WebGL applications (Laor et al 2022): Includes a discussion of countermeasures, Tor Browser does not appear to be affected".
Leading researcher Yoshua Bengio (Université de Montréal) published "RECOVER: sequential model optimization platform for combination drug repurposing identifies novel synergistic compounds in vitro". @RelationRx tweeted "New results out: Great collaboration with the team at and ! In a nutshell: repeated experiments guided by sequential model optimisation quickly hone in on highly synergistic drug combinations!".
The paper shared the most on social media this week is by a team at Google: "Block-NeRF: Scalable Large Scene Neural View Synthesis" by Matthew Tancik et al (Feb 2022) with 1,191 shares. @lazy_marcus (Saurabh Nair) tweeted "Just 1.5 years since the first paper, and we have a mapped SF streets".
This week was very active for "Computer Science - Artificial Intelligence", with 226 new papers.
The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "Natural Language Descriptions of Deep Visual Features" by Evan Hernandez et al (Jan 2022), which was referenced 9 times, including in the article Demystifying Machine-Learning Systems: Automatically Describing Neural Network Components in Natural Language in SciTechDaily. The paper author, Evan Hernandez, was quoted saying "In a neural network that is trained to classify images, there are going to be tons of different neurons that detect dogs. But there are lots of different types of dogs and lots of different parts of dogs. So even though ‘dog’ might be an accurate description of a lot of these neurons, it is not very informative. We want descriptions that are very specific to what that neuron is doing. This isn’t just dogs; this is the left side of ears on German shepherds". The paper got social media traction with 45 shares. A Twitter user, @cogconfluence, observed "Our #ICLR2022 work is out! 🎉 MILAN (mutual-information-guided linguistic annotation of neurons) describes in natural language what individual units in neural networks do. Paper: w/ Teona Bagashvili, Antonio Torralba, &".
Leading researcher Kyunghyun Cho (New York University) came out with "Generative multitask learning mitigates target-causing confounding". @summarizedml tweeted "A simple and scalable approach to causal representation learning for multitask learning, and improve robustness to prior probability shift. 📄".
The paper shared the most on social media this week is by a team at Microsoft: "Corrupted Image Modeling for Self-Supervised Visual Pre-Training" by Yuxin Fang et al (Feb 2022) with 86 shares. @ak92501 (AK) tweeted "Corrupted Image Modeling for Self-Supervised Visual Pre-Training abs: 300-epoch CIM pretrained vanilla ViT-Base/16 and ResNet-50 obtain 83.3 and 80.6 Top-1 fine-tuning accuracy on ImageNet-1K image classification respectively".
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 236 new papers.
The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "Natural Language Descriptions of Deep Visual Features" by Evan Hernandez et al (Jan 2022).
Leading researcher Kyunghyun Cho (New York University) published "Characterizing and overcoming the greedy nature of learning in multi-modal deep neural networks".
The paper shared the most on social media this week is by a team at Google: "Block-NeRF: Scalable Large Scene Neural View Synthesis" by Matthew Tancik et al (Feb 2022).
This week was active for "Computer Science - Computers and Society", with 31 new papers.
The paper discussed most in the news over the past week was "Health Advertising on Facebook: Privacy & Policy Considerations" by Andrea Downing et al (Jan 2022), which was referenced 10 times, including in the article Health Sites Let Ads Track Visitors Without Telling Them in Wired News. The paper author, Andrea Matwyshyn, was quoted saying "It’s entirely expected from my perspective that findings like this keep coming up for the category that I call 'health-ish' data that does not cleanly fall under the limited privacy protections that currently exist in US laws". The paper got social media traction with 102 shares. The authors analyzed content and marketing tactics of digital medicine companies to evaluate various types of cross-site tracking middleware used to extract health information from users without permission. A user, @mattsmear, tweeted "“Privacy Zuckering” happens when a user is tricked into publicly sharing more information than a user really intended to share. When... employed to elicit public data from patient populations online, one might consider the sensitivity of health data involved. 👇💣👇💣👇💣👇💣".
This week was active for "Computer Science - Human-Computer Interaction", with 33 new papers.
The paper discussed most in the news over the past week was by a team at Stanford University: "Jury Learning: Integrating Dissenting Voices into Machine Learning Models" by Mitchell L. Gordon et al (Feb 2022), which was referenced 3 times, including in the article Stanford researchers propose ‘jury learning’ as a way to mitigate bias in AI in Tech Register. The paper author, Mitchell Gordon, was quoted saying "Beyond the toxicity detection task we use as the primary application domain in our paper (and other social computing tasks one could easily imagine, like misinformation detection), we also envision jury learning being important in other high-disagreement user-facing tasks where we’re increasingly seeing AI play a role". The paper got social media traction with 15 shares.
This week was extremely active for "Computer Science - Learning", with 550 new papers.
The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "Natural Language Descriptions of Deep Visual Features" by Evan Hernandez et al (Jan 2022).
Leading researcher Yoshua Bengio (Université de Montréal) published "RECOVER: sequential model optimization platform for combination drug repurposing identifies novel synergistic compounds in vitro".
The paper shared the most on social media this week is "Anticorrelated Noise Injection for Improved Generalization" by Antonio Orvieto et al (Feb 2022) with 165 shares. @rasbt (Sebastian Raschka) tweeted "Very interesting. Basically, adding (anti-)correlated* (vs just uncorrelated) noise moves gradient descent to wider minima, which helps with generalization. *anticorrelated = consecutive perturbations are perfectly anticorrelated".
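The tweet above summarizes the core mechanism: instead of injecting independent Gaussian noise at each step, consecutive perturbations are made negatively correlated, which biases gradient descent toward wider (flatter) minima. As a rough illustration of that idea only (not the authors' code, and assuming the common formulation in which each step adds the difference of consecutive Gaussian draws), a minimal NumPy sketch on a toy quadratic loss:

```python
import numpy as np

def loss_grad(w):
    # Toy quadratic loss L(w) = 0.5 * ||w||^2, so the gradient is just w.
    return w

def anticorrelated_gd(w, lr=0.1, sigma=0.01, steps=100, seed=0):
    # Gradient descent with anticorrelated noise injection: each step adds
    # (xi_new - xi_old), so the effective perturbations of consecutive
    # steps are negatively correlated rather than independent.
    rng = np.random.default_rng(seed)
    xi_old = np.zeros_like(w)
    for _ in range(steps):
        xi_new = sigma * rng.standard_normal(w.shape)
        w = w - lr * loss_grad(w) + (xi_new - xi_old)
        xi_old = xi_new
    return w

w_final = anticorrelated_gd(np.array([1.0, -2.0]))
```

Because the injected differences telescope, the accumulated noise stays bounded while still perturbing each individual step, which is what distinguishes this from adding independent noise of the same scale.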
The most influential Twitter user discussing papers is AK who shared "Practical Imitation Learning in the Real World via Task Consistency Loss" by Mohi Khansari et al (Feb 2022) and said: "Practical Imitation Learning in the Real World via Task Consistency Loss abs: achieve 80% success across ten seen and unseen scenes using only ∼16.2 hours of teleoperated demonstrations in sim and real".
Over the past week, 17 new papers were published in "Computer Science - Multiagent Systems".
Over the past week, 27 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was by a team at University of California San Diego: "A Systematic Exploration of Reservoir Computing for Forecasting Complex Spatiotemporal Dynamics" by Jason A. Platt et al (Jan 2022), which was referenced 2 times, including in the article Data Driven Modeling of Complex Systems in Towards Data Science. The paper was shared once on social media.
The paper shared the most on social media this week is by a team at Google: "EvoJAX: Hardware-Accelerated Neuroevolution" by Yujin Tang et al (Feb 2022) with 215 shares. @arankomatsuzaki (Aran Komatsuzaki) tweeted "EvoJAX: Hardware-Accelerated Neuroevolution Presents EvoJAX, a neuroevolution toolkit that can find solutions to wide range of tasks within minutes on a single accelerator, compared to hours or days when using CPUs. repo: abs".
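EvoJAX itself is a JAX-based toolkit for running neuroevolution on accelerators; as a toy illustration of the evolution-strategy loop at the heart of such toolkits (plain NumPy, not the EvoJAX API, with a made-up toy fitness function), a minimal sketch:

```python
import numpy as np

def fitness(params, target):
    # Toy objective: maximize negative squared distance to a target vector.
    return -np.sum((params - target) ** 2)

def evolution_strategy(target, pop=100, sigma=0.1, lr=0.05, gens=300, seed=0):
    # Minimal evolution strategy: sample a population of Gaussian
    # perturbations around the current mean, score each candidate, and
    # move the mean along the fitness-weighted average perturbation.
    rng = np.random.default_rng(seed)
    mu = np.zeros_like(target)
    for _ in range(gens):
        eps = rng.standard_normal((pop, target.size))
        scores = np.array([fitness(mu + sigma * e, target) for e in eps])
        scores -= scores.mean()  # baseline subtraction reduces variance
        mu = mu + lr / (pop * sigma) * eps.T @ scores
    return mu

best = evolution_strategy(np.array([1.0, 2.0, 3.0]))
```

The population evaluations are embarrassingly parallel, which is why mapping this loop onto a GPU or TPU (as EvoJAX does with JAX's vectorizing transforms) collapses hours of CPU time into minutes.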
The most influential Twitter user discussing papers is AK who shared "Practical Imitation Learning in the Real World via Task Consistency Loss" by Mohi Khansari et al (Feb 2022) and said: "Practical Imitation Learning in the Real World via Task Consistency Loss abs: achieve 80% success across ten seen and unseen scenes using only ∼16.2 hours of teleoperated demonstrations in sim and real".
This week was active for "Computer Science - Robotics", with 57 new papers.
The paper discussed most in the news over the past week was "Is it personal? The impact of personally relevant robotic failures (PeRFs) on humans' trust, likeability, and willingness to use the robot" by Romi Gideoni et al (Jan 2022), which was referenced 2 times, including in the article Study examines the effects of personally relevant robotic failures on users' perceptions of collaborative robots in Tech Xplore. The paper author, Romi Gideoni, was quoted saying "In total, 132 participants engaged with a robot in person during a collaborative task of laundry sorting". The paper was shared once on social media.
Leading researcher Pieter Abbeel (UC Berkeley) published "Bingham Policy Parameterization for 3D Rotations in Reinforcement Learning". This paper was also shared the most on social media with 57 tweets.