NVIDIA announced the availability of the NVIDIA Jarvis framework, providing developers with state-of-the-art pre-trained deep learning models and software tools to create interactive conversational AI services that are easily adaptable for every industry and domain.
With billions of hours of phone calls, web meetings, and streaming broadcast video content generated daily, NVIDIA Jarvis models offer highly accurate automatic speech recognition, as well as superhuman language understanding, real-time translations for multiple languages, and new text-to-speech capabilities to create expressive conversational AI agents.
Utilizing GPU acceleration, the end-to-end speech pipeline can be run in under 100 milliseconds — listening, understanding, and generating a response faster than the blink of a human eye — and can be deployed in the cloud, in the data center, or at the edge, instantly scaling to millions of users.
NVIDIA Jarvis will enable a new wave of language-based applications previously not possible, improving interactions between humans and machines. It opens the door to the creation of such services as digital nurses to help monitor patients around the clock, relieving overloaded medical staff; online assistants to understand what consumers are looking for and recommend the best products; and real-time translations to improve cross-border workplace collaboration and enable viewers to enjoy live content in their own language.
Jarvis has been built using models trained for several million GPU hours on over 1 billion pages of text and 60,000 hours of speech data, spanning different languages, accents, environments, and lingos to achieve world-class accuracy. For the first time, developers can use NVIDIA TAO, a framework to train, adapt, and optimize these models with ease for any task, any industry, and any system.
Developers can select a Jarvis pre-trained model from NVIDIA’s NGC catalog, fine-tune it on their own data with the NVIDIA Transfer Learning Toolkit, optimize it for maximum throughput and minimum latency in real-time speech services, and then deploy the model with just a few lines of code, with no need for deep AI expertise.
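The deployment step described above can be sketched as a minimal Python client. The names here (`JarvisASRClient`, `transcribe`, the server address) are illustrative stand-ins, not the actual Jarvis SDK API, and the transcription call is stubbed with a canned result so the sketch runs standalone; a real client would stream audio to a deployed Jarvis server over the network.

```python
from dataclasses import dataclass


@dataclass
class TranscriptResult:
    """A recognized utterance plus the model's confidence."""
    text: str
    confidence: float


class JarvisASRClient:
    """Hypothetical speech-recognition client, not the real SDK API."""

    def __init__(self, server: str):
        # Address of the deployed Jarvis speech server (illustrative).
        self.server = server

    def transcribe(self, audio_bytes: bytes) -> TranscriptResult:
        # A real client would send the audio to the server and receive
        # a transcript; here we return a canned result so the sketch
        # executes without any service running.
        return TranscriptResult(text="hello world", confidence=0.95)


# The "few lines of code" a developer would write to use the service:
client = JarvisASRClient("localhost:50051")
result = client.transcribe(b"\x00\x01")
print(result.text)  # prints "hello world" (stubbed result)
```

The point of the pattern is that model selection, fine-tuning, and optimization all happen before deployment, so application code reduces to constructing a client and calling it.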
Since Jarvis’s early access program began last May, thousands of companies have asked to join. Among early users is T-Mobile, the U.S. telecom giant, which is looking to AI to further augment its machine learning products using natural language processing to provide real-time insights and recommendations.
NVIDIA is also partnering with Mozilla Common Voice, an open-source collection of voice data for startups, researchers, and developers to train voice-enabled apps, services, and devices. The world’s largest multi-language, public domain voice dataset, Common Voice contains over 9,000 total hours of contributed voice data in 60 different languages. NVIDIA is using Jarvis to develop pre-trained models with the dataset, and then offer them back to the community for free.
NVIDIA’s conversational AI tools have had more than 45,000 downloads. These tools can be combined with technology from hundreds of partners and support leading software libraries, allowing developers worldwide to build innovative and intuitive conversational AI applications.
Newly announced features will be released in the second quarter as part of the ongoing NVIDIA Jarvis open beta program. Developers can download it today from the NGC catalog.