Week Ending 3.13.2022
RESEARCH WATCH: 3.13.2022
This week was very active for "Computer Science - Artificial Intelligence", with 218 new papers.
The paper discussed most in the news over the past week was by a team at Microsoft: "Singularity: Planet-Scale, Preemptive and Elastic Scheduling of AI Workloads" by Dharma Shukla (Microsoft) et al (Feb 2022), which was referenced 23 times, including in the article What does Microsoft want to achieve with Singularity? in Analytics India Magazine. The paper got social media traction with 145 shares. A user, @CKsTechNews, tweeted "#Microsoft details 'planet-scale' AI infrastructure packing 100k-plus #GPUs Microsoft with the big mouth, less talk, more doing friends. Press Paper", while @PaperTldr said "87% Scheduling high utilization across deep learning and inference workloads is a crucial lever for cloud providers to train and deliver highly-efficient distributed service to their clients".
Leading researcher Abhinav Gupta (Carnegie Mellon University) published "The (Un)Surprising Effectiveness of Pre-Trained Vision Models for Control". The authors revisit and study the role of pre-trained visual representations for control, in particular representations trained on large-scale computer vision datasets. @arankomatsuzaki tweeted "The (Un)Surprising Effectiveness of Pre-Trained Vision Models for Control Finds that pre-trained visual reps can be competitive or even better than ground-truth state representations to train control policies. proj: abs".
The paper shared the most on social media this week is by a team at Idiap Research Institute: "HyperMixer: An MLP-based Green AI Alternative to Transformers" by Florian Mai et al (Mar 2022) with 104 shares. @summarizedml (SummarizedML) tweeted "HyperMixer is a simple MLP-based architecture for token mixing, and it performs better than Transformers".
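HyperMixer's core idea is to mix information across tokens with an MLP rather than attention, with the token-mixing weights produced by a hypernetwork. For intuition only, here is a minimal fixed-weight token-mixing step (closer to a plain MLP-Mixer layer than to HyperMixer proper); all shapes and names are illustrative, not from the paper:

```python
import numpy as np

def token_mix(x, w1, w2):
    """Mix information across tokens (not channels) with a two-layer MLP.

    x:  (tokens, channels) input
    w1: (tokens, hidden)   first mixing layer
    w2: (hidden, tokens)   second mixing layer
    """
    # Transpose so the MLP acts along the token dimension.
    h = np.maximum(x.T @ w1, 0.0)   # (channels, hidden), ReLU
    return (h @ w2).T               # back to (tokens, channels)

rng = np.random.default_rng(0)
tokens, channels, hidden = 8, 16, 32
x = rng.normal(size=(tokens, channels))
w1 = rng.normal(size=(tokens, hidden)) * 0.1
w2 = rng.normal(size=(hidden, tokens)) * 0.1
y = token_mix(x, w1, w2)
print(y.shape)  # (8, 16)
```

In HyperMixer, `w1` and `w2` would be generated from the token representations themselves, which is what removes the fixed-sequence-length constraint of plain MLP mixing.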
The most influential Twitter user discussing papers is AK who shared "DiT: Self-supervised Pre-training for Document Image Transformer" by Junlong Li et al (Mar 2022) and said: "DiT: Self-supervised Pre-training for Document Image Transformer abs: achieves sota results on downstream tasks, e.g. document image classification (91.11 → 92.69), document layout analysis (91.0 → 94.9) and table detection (94.23 → 96.55)".
This week was very active for "Computer Science - Computer Vision and Pattern Recognition", with 408 new papers.
The paper discussed most in the news over the past week was by a team at Microsoft: "Spatial Computing and Intuitive Interaction: Bringing Mixed Reality and Robotics Together" by Jeffrey Delmerico et al (Feb 2022), which was referenced 3 times, including in the article Mixed Reality Could Turn Robots Into Extensions of You in Lifewire - Tech Untangled. The paper got social media traction with 17 shares. On Twitter, @summarizedml commented "Spatial computing -- the ability of devices to be aware of their surroundings and to represent this digitally -- offers novel capabilities in human-robot…".
Leading researcher Luc Van Gool (Computer Vision Laboratory) came out with "ZippyPoint: Fast Interest Point Detection, Description, and Matching through Mixed Precision Discretization". The authors investigate the adaptations that detection and description networks require to enable their use on embedded platforms. @summarizedml tweeted "We investigate and adapt network quantization techniques for use in real-time applications".
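The network quantization the tweet refers to can be illustrated with a minimal post-training scheme. This sketch is a generic uniform int8 quantizer for intuition only, not ZippyPoint's actual mixed-precision discretization:

```python
import numpy as np

def quantize_int8(w):
    """Uniform symmetric post-training quantization of a weight tensor to int8."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(scale=0.05, size=(256, 256)).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
print(q.dtype)  # int8
```

Each stored weight shrinks from 32 bits to 8, and the round-to-nearest error is bounded by half a quantization step, which is why inference quality often survives the compression.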
The paper shared the most on social media this week is by a team at Google: "Kubric: A scalable dataset generator" by Klaus Greff et al (Mar 2022) with 410 shares. @HochreiterSepp (Sepp Hochreiter) tweeted "ArXiv Dataset generator for photo-realistic scenes with rich annotations. Scales well and produces huge datasets. Examples include 3D NeRF models and optical flow estimation. Pre-computed datasets and code available".
The most influential Twitter user discussing papers is Bitsbetrippin who shared "Two Attacks On Proof-of-Stake GHOST/Ethereum" by Joachim Neu et al (Mar 2022) and said: "Bottom line ... looks like #eth PoS Merge going to be delayed a bit longer. Unless this gets squashed pretty quickly. #ethereum #eth Good find. Good on you all for keeping the community informed".
Over the past week, 21 new papers were published in "Computer Science - Computers and Society".
The paper discussed most in the news over the past week was "Compute Trends Across Three Eras of Machine Learning" by Jaime Sevilla et al (Feb 2022), which was referenced 11 times, including in the article University Researchers Investigate Machine Learning Compute Trends in InfoQ. The paper author, Tamay Besiroglu, was quoted saying "Seeing so many prominent machine learning folks ridiculing this idea is disappointing". The paper also got the most social media traction with 463 shares. The researchers study trends in the most readily quantified factor: compute. A user, @TShevlane, tweeted "Remember the year 2010? We now have AI systems that take roughly 10 billion times more compute to train than back then. Seems like an important shift!", while @ohlennart observed "Compared to AI and Compute we find a slower, but still tremendous, doubling rate of 6 months instead of their 3.4 months. We analyze this difference in Appendix E 5/".
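The gap between a 6-month and a 3.4-month doubling time compounds dramatically over a decade. A quick back-of-envelope conversion (the function name is illustrative, not from the paper):

```python
# Convert a compute doubling time into a total growth factor over a period.
def growth_factor(months, doubling_months):
    return 2.0 ** (months / doubling_months)

decade = 120  # months
print(round(growth_factor(decade, 6.0)))  # 1048576, i.e. about a million-fold
print(growth_factor(decade, 3.4) > growth_factor(decade, 6.0))  # True
```

Even the paper's slower 6-month estimate implies roughly a million-fold increase in training compute per decade, which is why the authors still call it "tremendous".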
This week was active for "Computer Science - Human-Computer Interaction", with 34 new papers.
The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "Human Detection of Political Deepfakes across Transcripts, Audio, and Video" by Matthew Groh et al (Feb 2022), which was referenced 4 times, including in the article Deepfakes study finds doctored text is more manipulative than phony video in The Next Web. The paper got social media traction with 23 shares. The authors evaluate how communication modalities influence people's ability to discern real political speeches from fabrications based on a randomized experiment with 5,727 participants who provide 61,792 truth discernment judgments. A user, @piyush_s_mishra, tweeted "This is interesting. 1/n MIT recruited ~500 people to see how well they could identify deepfakes displayed on an MIT-created public website", while @arunasank said "🚨New arXiv pre-print alert🚨 Excited to share a new pre-print on differences in deepfakes discernment by modality. I learnt a ton working on this project, and parts of it featured in my thesis!✨".
This week was very active for "Computer Science - Learning", with 415 new papers.
The paper discussed most in the news over the past week was "Compute Trends Across Three Eras of Machine Learning" by Jaime Sevilla et al (Feb 2022), covered above under "Computer Science - Computers and Society".
Leading researcher Luc Van Gool (Computer Vision Laboratory) published "ZippyPoint: Fast Interest Point Detection, Description, and Matching through Mixed Precision Discretization". The authors investigate the adaptations that detection and description networks require to enable their use on embedded platforms. @summarizedml tweeted "We investigate and adapt network quantization techniques for use in real-time applications".
The paper shared the most on social media this week is by a team at OpenAI: "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer" by Greg Yang et al (Mar 2022) with 451 shares. @arankomatsuzaki (Aran Komatsuzaki) tweeted "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer By transferring from 40M parameters, µTransfer outperforms the 6.7B GPT-3, with tuning cost only 7% of total pretraining cost. abs: repo".
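What makes µTransfer work is that under the µP parametrization, optimal hidden-layer learning rates scale predictably with model width, so values tuned on a narrow proxy carry over to a wide target. A deliberately simplified sketch of that scaling idea (an illustrative 1/width rule, not the actual mup library API):

```python
def mup_scaled_lr(base_lr, base_width, width):
    """Scale a hidden-layer learning rate tuned on a narrow proxy model
    to a wider target model (simplified 1/width rule inspired by muP)."""
    return base_lr * (base_width / width)

# Tune once at width 256, then reuse at width 8192.
tuned_lr = 0.02
print(mup_scaled_lr(tuned_lr, 256, 8192))  # 0.000625
```

The economics follow directly: the expensive hyperparameter sweep runs only on the small proxy, so tuning cost stays a small fraction of the large model's pretraining cost.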
The most influential Twitter user discussing papers is Dave Aitel who shared "SyzScope: Revealing High-Risk Security Impacts of Fuzzer-Exposed Bugs in Linux kernel" by Xiaochen Zou et al (Nov 2021) and said: "Some good work here".
This week was active for "Computer Science - Multiagent Systems", with 23 new papers.
The paper discussed most in the news over the past week was by a team at OpenAI: "AutoDIME: Automatic Design of Interesting Multi-Agent Environments" by Ingmar Kanitscheider et al (Mar 2022), which was referenced 1 time, including in the article OpenAI's AutoDIME: Automating Multi-Agent Environment Design for RL Agents in SyncedReview.com. The paper got social media traction with 8 shares. A Twitter user, @summarizedml, observed "We examine a set of intrinsic teacher rewards derived from prediction problems that can be applied in multi-agent settings".
Over the past week, 16 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was by a team at Massachusetts Institute of Technology: "Biological error correction codes generate fault-tolerant neural networks" by Alexander Zlokapa et al (Feb 2022), which was referenced 1 time, including in the article ML & Neuroscience: February 2022 must-reads in Towards Data Science. The paper got social media traction with 62 shares. A user, @AdamMarblestone, tweeted "From fellow and 1st year graduate student Alex Zlokapa et al. Alex also had one of the best COVID predictors last year", while @jbimaknee commented "A few years ago, we published a formal analysis of dentate gyrus coding schemes, linking grid coding, continual neurogenesis, and error correction".
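The fault-tolerance idea behind the paper can be illustrated with the simplest error-correcting code: replicate a noisy unit and take a majority vote. The parameters below are made up for illustration and are not from the paper:

```python
import random

def noisy_unit(bit, flip_prob, rng):
    """A unit that outputs its input bit, flipped with probability flip_prob."""
    return bit ^ (rng.random() < flip_prob)

def majority(bits):
    """Majority vote over redundant noisy copies."""
    return int(sum(bits) > len(bits) / 2)

rng = random.Random(42)
flip, trials = 0.1, 10_000
single_errs = sum(noisy_unit(1, flip, rng) != 1 for _ in range(trials))
coded_errs = sum(
    majority([noisy_unit(1, flip, rng) for _ in range(5)]) != 1
    for _ in range(trials)
)
print(single_errs > coded_errs)  # True: redundancy suppresses the error rate
```

With a 10% per-unit fault rate, a 5-way majority fails only when three or more copies fault at once, which happens under 1% of the time, so the coded error count is far below the uncoded one.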
Leading researcher Jianfeng Gao (Microsoft) came out with "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer", which was also the paper shared the most on social media with 451 tweets. @arankomatsuzaki (Aran Komatsuzaki) tweeted "Tensor Programs V: Tuning Large Neural Networks via Zero-Shot Hyperparameter Transfer By transferring from 40M parameters, µTransfer outperforms the 6.7B GPT-3, with tuning cost only 7% of total pretraining cost. abs: repo".
This week was extremely active for "Computer Science - Robotics", with 124 new papers.
The paper discussed most in the news over the past week was by a team at UC Berkeley: "ViKiNG: Vision-Based Kilometer-Scale Navigation with Geographic Hints" by Dhruv Shah et al (Feb 2022), which was referenced 4 times, including in the article Geographic Hints Help a Simple Robot Navigate for Kilometers in Spectrum Online. The paper author, Sergey Levine (University of California, Berkeley), was quoted saying "But autonomous driving or other tasks with higher stakes (or even real sidewalk delivery that has to deal with dense traffic) has to have additional mechanisms to deal with safety and constraints, which the current approach doesn't directly handle just yet". The paper got social media traction with 29 shares. The researchers propose a learning-based approach that integrates learning and planning, and can utilize side information such as schematic roadmaps, satellite maps and GPS coordinates as a planning heuristic, without relying on them being accurate. A Twitter user, @hzarkoob, said "Vi-King or We-King. I vote for the former. The latter reminds one of the monarchs. It should have been stopped earlier".
Leading researcher Luc Van Gool (Computer Vision Laboratory) published "ZippyPoint: Fast Interest Point Detection, Description, and Matching through Mixed Precision Discretization", covered above.