
Google details RigL algorithm for building more efficient neural networks


Google LLC today detailed RigL, an algorithm developed by its researchers that makes artificial intelligence models more hardware-efficient by shrinking them.

Neural networks are made up of artificial neurons — individual mathematical operations implemented in code — linked together by software connections. These connections enable the neurons to pass data to one another for processing. RigL makes AI software more efficient by fixing a common optimization issue in machine learning models: they often have more connections between neurons than they strictly need.

The connections in an AI model effectively serve as data pathways and the data that the model processes usually only passes through a subset of those pathways. The others are left unused, unnecessarily taking up processor and memory resources. According to Google, RigL removes redundant connections by making strategic tweaks to a neural network’s structure during the training phase of development.
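In practice, a pruned network is often represented by multiplying the weight matrix by a binary mask, so the removed pathways carry no signal. A minimal sketch of the idea (a hypothetical illustration using magnitude-based masking, not the article's or Google's code):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4))

# Keep only the "useful" pathways; the threshold 0.5 is arbitrary here.
mask = np.abs(weights) > 0.5
sparse_weights = weights * mask   # pruned connections are zeroed out

x = rng.normal(size=4)
y = sparse_weights @ x            # forward pass uses only active connections
```

Hardware or sparse kernels can then skip the zeroed entries entirely, which is where the processor and memory savings come from.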

Google researchers put RigL to the test in an experiment involving an image processing model tasked with analyzing images containing different characters. During training, RigL determined that the AI only needed to analyze the character in the foreground of each image and could skip the background pixels, which don't contain any useful information. The algorithm then removed the connections used for processing background pixels and added new, more efficient ones in their place.

“The algorithm identifies which neurons should be active during training, which helps the optimization process to utilize the most relevant connections and results in better sparse solutions,” explained Google research engineers Utku Evci and Pablo Samuel Castro in a blog post. “At regularly spaced intervals we remove a fraction of the connections.”
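The periodic update the engineers describe can be sketched in plain NumPy. This is a hedged illustration of RigL's general recipe — drop the lowest-magnitude active weights, then grow the same number of inactive connections where gradients are largest — not Google's actual implementation, and the function name and `drop_fraction` value are made up for the example:

```python
import numpy as np

def rigl_step(weights, grads, drop_fraction=0.3):
    """One hypothetical drop-and-grow update on dense NumPy arrays."""
    w = weights.flatten().copy()
    active = w != 0
    n_swap = int(drop_fraction * active.sum())
    if n_swap == 0:
        return weights.copy(), active.reshape(weights.shape)

    # Drop: deactivate the n_swap active connections with the
    # smallest weight magnitude.
    mag = np.abs(w)
    mag[~active] = np.inf              # inactive slots can't be dropped
    drop_idx = np.argsort(mag)[:n_swap]
    w[drop_idx] = 0.0

    # Grow: activate the n_swap previously inactive connections with
    # the largest gradient magnitude; grown weights start at zero.
    g = np.abs(grads).flatten()
    g[active] = -np.inf                # already-active slots can't be grown
    grow_idx = np.argsort(g)[-n_swap:]
    mask = w != 0
    mask[grow_idx] = True
    return w.reshape(weights.shape), mask.reshape(weights.shape)
```

Because every drop is matched by a grow, the total number of active connections — and hence the model's sparsity level — stays constant throughout training.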

There are other methods besides RigL that attempt to compress neural networks by removing redundant connections. However, those methods have the downside of significantly reducing the compressed model’s accuracy, which limits their practical application. Google says RigL achieves higher accuracy than three of the most sophisticated alternative techniques while also “consistently requiring fewer FLOPs (and memory footprint) than the other methods.”

In one test, Google researchers used RigL to delete 80% of the connections in the popular ResNet-50 model. The resulting neural network achieved accuracy comparable to that of the original. In another experiment, researchers shrank ResNet-50 by 99% and still saw a top accuracy of 70.55%.

“RigL is useful in three different scenarios: Improving the accuracy of sparse models intended for deployment … improving the accuracy of large sparse models that can only be trained for a limited number of iterations [and] combining with sparse primitives to enable training of extremely large sparse models which otherwise would not be possible,” Google’s Utku Evci and Pablo Samuel Castro detailed.

Source: siliconangle.com

