Is Tesla Dumping Python For This Programming Language

The only phenomenon that has matched the growth of artificial intelligence in recent years is that of Python. Python has become the go-to language for many organisations setting up data science and machine learning departments. The transition to Python was so rapid that several other programming languages were thought to have become obsolete.

However, Elon Musk, CEO of Tesla, announced in a series of tweets how serious Tesla is about bringing great minds together to work on its AI-related projects. He also extended a house-party invitation to AI enthusiasts to participate in a hackathon.

“Educational background is irrelevant”

Although the neural networks for Tesla's computer vision models are written in Python, Musk added that the team needs people with excellent coding skills, especially in C and C++.

C/C++ for building self-driving cars might sound odd, but Musk's tweet does cast some doubt on the hype around Python.

Our NN is initially in Python for rapid iteration, then converted to C++/C/raw metal driver code for speed (important!). Also, tons of C++/C engineers needed for vehicle control & entire rest of car. Educational background is irrelevant, but all must pass hardcore coding test.

— Elon Musk (@elonmusk) February 3, 2020

This did not go down well with some developers, who pointed out the pitfalls of such infrastructure complexity.

Tesla researchers authors NNs in python land and rewrite with a bare C++ implementation when deploying. This feels like a failure of our tooling / infrastructure. https://t.co/75eSse1sEK

— Nikhil Thorat (@nsthorat) February 3, 2020

However, a tweet cannot be taken at face value; the information is often condensed. Soumith Chintala, co-creator of PyTorch, shed some light on what Musk might really have meant. He explained that converting to C++ does not mean a hand-rewrite in C++, but auto-converting the model to their low-level runtime.

He also added that the Tesla team has its own ASIC, sensors and so on, which probably come with their own tooling, drivers, staged IR and compiler.
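To make the idea of "auto-converting" concrete, here is a toy sketch of one common technique: tracing. A Python-defined model is run once with proxy values, and every operation it performs is recorded into a flat op list (a staged IR) that a lower-level runtime could then compile and execute. All class and op names below are illustrative; this is not Tesla's or PyTorch's actual machinery.

```python
# Toy "tracing" sketch: run a Python-defined model once with proxy
# values, recording each operation into a flat op list (a staged IR).
# A production stack would compile such an IR down to C++/driver code.

class Tracer:
    def __init__(self):
        self.ops = []          # recorded IR: (op name, input ids, output id)
        self._next_id = 0

    def value(self):
        """Allocate a fresh proxy value with a unique id."""
        vid = self._next_id
        self._next_id += 1
        return TracedValue(self, vid)

    def record(self, op, *in_ids):
        """Record one op and return a proxy for its result."""
        out = self.value()
        self.ops.append((op, in_ids, out.id))
        return out

class TracedValue:
    def __init__(self, tracer, vid):
        self.tracer, self.id = tracer, vid

    # Arithmetic on proxies records ops instead of computing numbers.
    def __mul__(self, other):
        return self.tracer.record("mul", self.id, other.id)

    def __add__(self, other):
        return self.tracer.record("add", self.id, other.id)

def model(x, w, b):            # ordinary Python: y = x * w + b
    return x * w + b

tracer = Tracer()
x, w, b = tracer.value(), tracer.value(), tracer.value()
out = model(x, w, b)           # tracing pass records the op graph
print(tracer.ops)              # [('mul', (0, 1), 3), ('add', (3, 2), 4)]
```

The payoff of this arrangement is that researchers keep iterating in plain Python, while deployment consumes only the recorded IR, which no longer depends on the Python interpreter at all.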

The C++ language also facilitates direct mapping of hardware features and zero-overhead abstractions based on those mappings.

The Curse of Tool Idolatry

Most of the popular machine learning frameworks, such as TensorFlow and PyTorch, and even CUDA, rely on C++.

CUDA, for instance, is more of a toolkit than a programming language: it provides extensions that let developers working in C/C++ express massive parallelism and direct the compiler to the portions of the application that map to the GPU.

Similarly, Python serves mostly as an interface: it lets one leverage ML features without learning the nitty-gritty of C++, so that more developers from non-coding backgrounds can come on board and build ML applications.

Python is easy to learn, and most of its popularity stems from this fact alone. Scratch the surface, however, and one finds that its easy-to-use APIs and interfaces are shouldered by the likes of C++.

From optimised linear algebra libraries like BLAS to computer vision libraries like OpenCV, everything that needs speed is written in C or C++ with Python bindings.
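The "compiled library with Python bindings" pattern can be seen in miniature with the standard-library ctypes module, which loads a compiled C library and calls into it directly. This is only a sketch: real ML libraries use richer binding layers (pybind11, Cython), and the library-name lookup below is platform-dependent.

```python
# Minimal "Python bindings over C" sketch using the stdlib ctypes module.
# We call sqrt from the system C math library directly; the idea is the
# same as in ML frameworks: Python stays the interface, compiled C/C++
# does the actual work.
import ctypes
import ctypes.util

libm_name = ctypes.util.find_library("m")   # e.g. "libm.so.6" on Linux
libm = ctypes.CDLL(libm_name)

libm.sqrt.restype = ctypes.c_double          # declare the C signature
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))                        # computed in C, not in Python
```

Declaring `restype`/`argtypes` is what makes the boundary safe: without it, ctypes would guess the C signature and silently corrupt the result.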

Unlike in C++, Python users can write a convolutional neural network from scratch in under 50 lines. C++ requires knowledge of a few intricacies, which is a big no for newcomers, and time is critical here. For example, a physicist incorporating ML tools would prefer something as lightweight and straightforward as Python. Meanwhile, C++ does all the heavy lifting (read: matrix multiplication) in the background of these libraries and frameworks.
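As an illustration of how little ceremony Python needs, here is the core operation of a CNN, a 2-D convolution, written from scratch in pure Python (no NumPy) in a handful of lines. A full network would stack such ops with nonlinearities and learned kernels.

```python
# A "valid" (no padding) 2-D convolution written from scratch in pure
# Python -- the core operation of a convolutional neural network.
def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # dot product of the kernel with the patch under it
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge = [[1, -1]]            # simple horizontal edge-detector kernel
print(conv2d(image, edge))  # [[-1, -1], [-1, -1], [-1, -1]]
```

The same operation in the C++ that frameworks actually ship involves memory layout, templates and vectorisation concerns, which is exactly the trade-off the article describes.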

According to the PyTorch team, the C++ frontend enables research in environments in which Python cannot be used or is not the right tool for the job. The advantages were summarised as follows:

  • If one wants to do reinforcement learning research in a pure C++ game engine with high frames-per-second and low-latency requirements, a pure C++ library is a much better fit for such an environment than a Python library.
  • Due to the Global Interpreter Lock (GIL), Python cannot execute bytecode on more than one thread at a time. Multiprocessing is an alternative, but it is not as scalable and has significant shortcomings. C++ has no such constraint, and its threads are easy to create and use.
  • A C++ frontend allows the user to remain in C++ end to end, eliminating the need to switch back and forth between Python and C++ during training.
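The GIL point in the list above can be illustrated from the Python side. In the sketch below, CPU-bound work split across two threads still completes correctly, but because only one thread executes Python bytecode at any instant, it runs no faster than the sequential version (the sketch shows the pattern only; it does not time it).

```python
# Demonstrating the GIL constraint: threads run concurrently, but only
# one executes Python bytecode at a time, so CPU-bound work like this
# does not speed up with more threads. (I/O-bound work does, because
# the GIL is released while waiting.)
import threading

def count_primes(lo, hi, out, idx):
    # naive CPU-bound work; the GIL serialises this across threads
    n = 0
    for x in range(lo, hi):
        if x > 1 and all(x % d for d in range(2, int(x ** 0.5) + 1)):
            n += 1
    out[idx] = n

out = [0, 0]
threads = [threading.Thread(target=count_primes, args=(2, 5000, out, 0)),
           threading.Thread(target=count_primes, args=(5000, 10000, out, 1))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(out))   # total primes below 10,000 -> 1229
```

In C++, the two workers would genuinely run in parallel on two cores; in CPython, escaping this limit means multiprocessing or pushing the loop into a C extension that releases the GIL.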

In short, Python may not be tractable for research work such as reinforcement learning projects because of the slowness of the Python interpreter; in those cases, a C++ library is the right fit.

TensorFlow, for its part, is mostly a combination of highly optimised C++ and CUDA, built on Eigen (a high-performance C++ and CUDA numerical library) and NVIDIA's cuDNN (an optimised deep neural network library used for functions such as convolutions).

Choosing a language or tool boils down to the trade-off between ease of development and latency. For domain experts in ML, demanding knowledge of C++ is too much to ask; they can get going with Python while C++ developers write the code that talks to the machine. This arrangement works well for many organisations, and it makes sense that Tesla's AI team needs an army of both Python and C++ developers to build its next generation of autonomous products.

Source: analyticsindiamag.com

