Nvidia leverages AWS Marketplace to advance GPU deployment

Nvidia today announced it is making 21 tools for building applications on its graphics processing units (GPUs) available on the AWS Marketplace. This is part of a larger effort to streamline the process of embedding AI capabilities into apps.

The relationship between Nvidia and AWS is becoming more complicated. Earlier this month, AWS agreed to make rival AI accelerators from Intel available as a cloud service, and it also signaled its intention to build its own machine learning chips.

These tools, already available on the Nvidia GPU Cloud (NGC), are packaged as Docker containers that can be deployed anywhere, including the GPU cloud services based on Nvidia processors that AWS makes available.

Nvidia claims these components have already been downloaded more than a million times by 250,000 developers and data scientists.

The goal is to make it as simple as possible to build, train, and deploy AI applications on Nvidia processors, which will soon include Arm-based processors via the company's previously announced acquisition of Arm, expected to close next year.

Nvidia GPUs are mainly employed to train AI models more cost-effectively, and those GPUs are usually accessed via the cloud. The inference engines those AI models run on, by contrast, are most commonly deployed on x86 processors. Nvidia has been making the case for running those inference engines on lower-end GPUs or Arm processors as well.

Regardless of the processor type, it should be feasible to deploy Nvidia software that is encapsulated in containers. Those tools span everything from instances of MXNet, TensorFlow, Nvidia Triton Inference Server, and PyTorch software to frameworks for video analytics and software development kits made up of multiple compilers and libraries.
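As a rough illustration of what deploying one of these containers looks like in practice, the commands below pull a framework image from Nvidia's NGC registry (nvcr.io) and run it with GPU access. The specific image tag is an example, not taken from the announcement; current tags are listed in the NGC catalog, and running the container requires Docker plus the NVIDIA Container Toolkit on a GPU-equipped host.

```shell
# Pull a PyTorch framework container from the NGC registry.
# The tag is illustrative; check the NGC catalog for current versions.
docker pull nvcr.io/nvidia/pytorch:20.12-py3

# Run the container with all GPUs exposed and confirm CUDA is visible.
# --gpus all requires the NVIDIA Container Toolkit to be installed.
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:20.12-py3 \
  python -c "import torch; print(torch.cuda.is_available())"
```

Because the software ships as an ordinary container image, the same commands work on-premises or on a cloud GPU instance, which is the portability the announcement emphasizes.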

Naturally, competition is fierce as cloud service providers battle for the hearts and minds of the developers building these applications, given the amount of compute resources they consume. As competition drives down the cost of accessing those resources, the rate at which AI applications are being built and deployed should accelerate.

But the real challenge is not so much accessing the tools and compute resources needed to build these applications as it is finding and retaining the data scientists and developers required to build, deploy, and maintain them.
