
Amazon Introduces ResNeSt: Strong, Split-Attention Networks


The ResNet (residual neural network) architecture debuted in 2015 and quickly proved itself, winning the prestigious CVPR 2016 Best Paper Award. ResNet also took first place on three tasks in the ImageNet competition and aced the detection and segmentation tasks in the COCO competition. Over the past four years, the ResNet paper has been cited over 40,000 times, and many variations of the network have appeared.

The latest ResNet improvement comes courtesy of researchers from Amazon and UC Davis, who this week unveiled ResNeSt, a Split-Attention Network. The new network inherits ResNet's concise and universal design and shows significant performance improvement without a large increase in the number of parameters, surpassing previous models such as ResNeXt and SE-Net.

“Although image classification models continue to evolve, most downstream applications such as object detection and semantic segmentation are still using ResNet variants as the backbone network because of its simple and modular structure.”

Hang Zhang, an Applied Scientist at Amazon Lab126 and the paper's first author, says classification networks are usually the core component of downstream applications. While many recent classification network designs do not retain ResNet's basic modular design, ResNet is still widely used as the backbone for mainstream applications. The ResNeSt variant is therefore designed to be applied directly to such mainstream models.


In the paper, researchers propose a modular Split-Attention block that distributes attention across several feature-map groups. The Split-Attention block is a computational unit composed of a feature-map grouping operation followed by a split-attention operation. By stacking these Split-Attention blocks in the style of ResNet, the researchers produced the new variant. ResNeSt maintains the overall ResNet structure and can be used directly for downstream tasks without adding extra computational cost.
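The split-attention idea — split the feature maps, pool a global summary, and softmax-weight the splits before recombining — can be sketched in a few lines. The following is a minimal NumPy illustration, not the paper's implementation: the real block computes per-split attention logits with learned fully connected layers after global pooling, whereas the per-split logits here are a toy linear stand-in.

```python
import numpy as np

def split_attention(x, radix=2):
    """Toy split-attention over feature-map splits.

    x: array of shape (radix * C, H, W), viewed as `radix` splits of C channels.
    Returns a (C, H, W) array: an attention-weighted sum of the splits.
    """
    r = radix
    c = x.shape[0] // r
    splits = x.reshape(r, c, *x.shape[1:])          # (radix, C, H, W)
    # Global context: sum splits, then global average pool -> (C,)
    gap = splits.sum(axis=0).mean(axis=(1, 2))
    # Per-split logits from the pooled summary.
    # NOTE: a learned dense layer in the real block; a toy scaling here.
    logits = np.stack([gap * (i + 1) for i in range(r)])       # (radix, C)
    # r-softmax: attention weights over splits, per channel
    attn = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    # Weighted combination of the splits -> (C, H, W)
    return (attn[:, :, None, None] * splits).sum(axis=0)
```

Because the attention weights for each channel sum to one across splits, the output is a convex combination of the splits; with `radix=1` the block reduces to the identity.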

Overview of ResNeSt, SE-Net and SK-Net.

In experiments, ResNeSt outperformed other networks of similar model complexity, and its image classification performance on ImageNet easily surpassed SK-Net, SE-Net, ResNeXt, and ResNet.

ResNeSt-50 achieves a top-1 accuracy of 81.13 percent on ImageNet, about 1 percent higher than the previous SOTA ResNet variant. This improvement is especially meaningful for downstream tasks such as object detection and semantic segmentation. In object detection, replacing a ResNet-50 backbone with ResNeSt-50 improves the mAP (mean Average Precision) of Faster-RCNN and Cascade-RCNN models by about 3 percent over the standard ResNet baselines.


The paper also introduces a number of training strategies that should serve as a valuable reference for AI practitioners. For more detailed information, check out the GitHub project page.

The paper ResNeSt: Split-Attention Networks is on arXiv.

Author: Herin Zhao | Editor: Michael Sarazen


