Week Ending 10.25.2020
RESEARCH WATCH: 10.25.2020
This week was very active for "Computer Science - Artificial Intelligence", with 221 new papers.
The paper discussed most in the news over the past week was "It's Not Just Size That Matters: Small Language Models Are Also Few-Shot Learners" by Timo Schick et al (Sep 2020), which was referenced 8 times, including in the Medium.com article BERT, GPT-x, and XLNet: AE, AR, and the Best of Both Worlds. Anna Rogers (University of Massachusetts Lowell), who was not involved in the study, said "More data & compute = SOTA". The paper got social media traction with 359 shares. The researchers show that performance similar to GPT-3 can be obtained with language models whose parameter count is several orders of magnitude smaller. A Twitter user, @timo_schick, said "🎉 New paper 🎉 We show that language models are few-shot learners even if they have far less than 175B parameters. Our method performs similar to GPT-3 on SuperGLUE after training on 32 examples with just 0.1% of its parameter count: #NLProc".
The paper shared the most on social media this week is by a team at University of North Carolina at Chapel Hill: "Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision" by Hao Tan et al (Oct 2020) with 200 shares. @Thom_Wolf (Thomas Wolf) tweeted "This is a really cool piece of work! The first time I see an {image+text} BERT-model outperform BERT on common text-only tasks".
This week was active for "Computer Science - Computer Vision and Pattern Recognition", with 289 new papers.
The paper discussed most in the news over the past week was by a team at Microsoft: "VIVO: Surpassing Human Performance in Novel Object Captioning with Visual Vocabulary Pre-Training" by Xiaowei Hu et al (Sep 2020), which was referenced 12 times, including in The Register article Microsoft builds image-to-caption AI so that your visually impaired coworkers can truly comprehend your boss's PowerPoint abominations. The paper author, Lijuan Wang (Microsoft), was quoted saying "The nocaps challenge is really how are you able to describe those novel objects that you haven't seen in your training data?". The paper got social media traction with 13 shares. The researchers present VIsual VOcabulary pre-training (VIVO), which performs pre-training in the absence of caption annotations.
Leading researcher Yoshua Bengio (Université de Montréal) published "Cross-Modal Information Maximization for Medical Imaging: CMIM".
The paper shared the most on social media this week is by a team at University of Michigan: "Castle in the Sky: Dynamic Sky Replacement and Harmonization in Videos" by Zhengxia Zou (Oct 2020) with 276 shares. The investigators propose a vision-based method for video sky replacement and harmonization, which can automatically generate realistic and dramatic sky backgrounds in videos with controllable styles. @popular_ML (Popular ML resources) tweeted "The most popular ArXiv tweet in the last 24h".
This week was active for "Computer Science - Computers and Society", with 34 new papers.
The paper discussed most in the news over the past week was "We Don't Speak the Same Language: Interpreting Polarization through Machine Translation" by Ashiqur R. KhudaBukhsh et al (Oct 2020), which was referenced 6 times, including in the Mirage News article Even Our Language Is Polarized. The paper author, Mark S. Kamlet (University Professor of Economics and Public Policy), was quoted saying "Some of these so-called misaligned pairs seem pretty obvious". The paper got social media traction with 20 shares. On Twitter, @hrksrkr commented "Research suggests that polarization in the political sphere has become so extreme that supporters literally express the same sentiments in different languages. One of the lead authors is my brother who just completed his bachelor's CS".
The paper shared the most on social media this week is by a team at Universidade Federal de Minas Gerais: "Does Platform Migration Compromise Content Moderation? Evidence from r/The_Donald and r/Incels" by Manoel Horta Ribeiro et al (Oct 2020) with 56 shares. @manoelribeiro (Manoel) tweeted "When toxic web communities get banned, they don’t disappear... Rather, they migrate to a new platform! In that context, it is worth asking: -> Should platforms ban these communities? Our new pre-print tackles this question 📜: 1/".
This week was active for "Computer Science - Human-Computer Interaction", with 33 new papers.
The paper discussed most in the news over the past week was "Towards Hardware-Agnostic Gaze-Trackers" by Jatin Sharma et al (Oct 2020), which was referenced once, in the Venturebeat article Microsoft researchers develop assistive eye-tracking AI that works on any device. The paper got social media traction with 9 shares.
This week was extremely active for "Computer Science - Learning", with 631 new papers.
The paper discussed most in the news over the past week was by a team at Microsoft: "VIVO: Surpassing Human Performance in Novel Object Captioning with Visual Vocabulary Pre-Training" by Xiaowei Hu et al (Sep 2020).
Leading researcher Yoshua Bengio (Université de Montréal) published "Predicting Infectiousness for Proactive Contact Tracing".
The paper shared the most on social media this week is by a team at Ludwig-Maximilians-University Munich: "Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges" by Christoph Molnar et al (Oct 2020) with 263 shares. @popular_ML (Popular ML resources) tweeted "The most popular ArXiv tweet in the last 24h".
This week was active for "Computer Science - Multiagent Systems", with 25 new papers.
Leading researcher Yoshua Bengio (Université de Montréal) published "Predicting Infectiousness for Proactive Contact Tracing".
Over the past week, 28 new papers were published in "Computer Science - Neural and Evolutionary Computing".
The paper discussed most in the news over the past week was "Pruning Neural Networks at Initialization: Why are We Missing the Mark?" by Jonathan Frankle et al (Sep 2020), which was referenced once, in The Next Web article Why training neural networks comes with a hefty price tag. The paper got social media traction with 186 shares. A user, @roydanroy, tweeted "New work with colleagues at and on pruning at initialization. Key experiments shed light on what's missing from current methods. This is apparently a hot topic as we've learned several other teams were hot on the trail".
This week was extremely active for "Computer Science - Robotics", with 112 new papers.
The paper discussed most in the news over the past week was "Design and Development of a Gecko-Adhesive Gripper for the Astrobee Free-Flying Robot" by A. Cauligi et al (Sep 2020), which was referenced 2 times, including in the article A gecko-adhesive gripper for the Astrobee free-flying robot in Tech Xplore. The paper author, Abhishek Cauligi, was quoted saying "In addition to investigating the potential of the gecko-adhesive technology itself, we now plan to run experiments that explore the trajectory generation aspect of the problem for how Astrobee gets from point A to point B, using novel tools from the fields of optimization and machine learning".
The paper shared the most on social media this week is by a team at Stanford University: "Batch Exploration with Examples for Scalable Robotic Reinforcement Learning" by Annie S. Chen et al (Oct 2020) with 56 shares. @hammadxhammad (ham(mad)) tweeted "worked on something similar in uni for my robotics coursework. The thymio went crazy and decided to jump over the table. The university made me pay for the damages to the robot. It’s not my fault the robot’s actions were an accurate depiction of my mental state at that time".