Nvidia Chat with RTX: An on-device AI LLM trained on your PC data

  • Nvidia Chat with RTX is an on-device large language model that uses the information on your PC to fulfil your requests. 
  • It can also be used on data on the internet, such as web pages.
  • While the model is capable, people suggest that its abilities are greatly exaggerated.

Nvidia is stepping up its involvement in the AI revolution by moving beyond its traditional role and establishing itself as a key player. Following a developer conference filled with significant announcements, Nvidia has introduced a new product for testing: Nvidia Chat with RTX.

This on-device large language model operates entirely on your PC, using local information to respond to requests. Similar to Samsung’s Galaxy AI, launched with the Galaxy S24 series, which handled most AI tasks on-device, Nvidia Chat with RTX brings this capability to PCs, potentially reshaping our approach to AI, Transformers, and large language models (LLMs).

Chat with RTX

Chat with RTX is an AI chatbot that lives on your PC. Installed as a local application rather than a cloud service, it provides a chatbot that knows the data on your PC. So, whether you’re trying to recall something from the October 2022 month-end report or where you made dinner reservations on the night of February 17, 2019, if the information is on your PC, Chat with RTX will be able to bring it up.
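Nvidia describes Chat with RTX as pairing a local LLM with retrieval-augmented generation (RAG) over your files. As a rough illustration of the retrieval half only, here is a toy, pure-Python sketch: a naive keyword scorer over local text files. This is not Nvidia’s pipeline, and all function names are invented for illustration.

```python
# Toy sketch of the core idea behind an on-device assistant: index
# local files, then answer queries by retrieving the most relevant
# ones. A real system would use embeddings and a vector index; this
# naive keyword scorer only shows that nothing leaves the machine.
from pathlib import Path


def build_index(root: str) -> dict[str, str]:
    """Read every .txt file under `root` into an in-memory index."""
    return {
        str(p): p.read_text(encoding="utf-8", errors="ignore")
        for p in Path(root).rglob("*.txt")
    }


def query(index: dict[str, str], question: str, top_k: int = 3) -> list[str]:
    """Rank indexed files by how often they mention the query words."""
    words = question.lower().split()
    scored = sorted(
        index,
        key=lambda f: sum(index[f].lower().count(w) for w in words),
        reverse=True,
    )
    return scored[:top_k]
```

In a full RAG setup, the retrieved passages would then be fed to the local model as context for its answer, which is what lets the chatbot ground its replies in your own documents.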

Advantages of On-device AI

Speed: One of the greatest advantages of Chat with RTX is the speed of processing. Doing all processing on-device means the speed of your internet connection becomes irrelevant. You don’t even need one! This is a major step forward compared to existing digital or AI assistants.

Privacy: Keeping all processing on-device also has a great advantage in terms of privacy. All information is processed and stored locally, giving the owner full control and custody. No third party can look through, collate, mine, sell, or leak your data, intentionally or otherwise.

Security: The custody of the data also brings advantages in security. Because you never need to log in to a remote service or transmit your information, the attack surface shrinks: the data never has to travel or be stored elsewhere, even temporarily.

Customization: Another clear advantage is customization. Chat with RTX works from the information stored on your computer, essentially acting as an LLM grounded in your data. Whether the bulk of your information is documents, still images, or video, it is a chatbot that is specific to you and your data.

Flexibility: Chat with RTX isn’t limited to the data on your device; it can also be pointed at data on the internet, such as web pages. You can ask Chat with RTX to dig for information on a web page, and it will provide results, with the processing itself still happening locally. It’s also worth noting this is just the beginning.
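How Chat with RTX actually ingests URLs is Nvidia’s own pipeline; as a hedged sketch of the general idea, the snippet below strips a fetched page’s HTML to plain text with the standard library and searches it on-device, so only the page fetch touches the network. The class and function names are made up for illustration.

```python
# Illustrative sketch: reduce a web page to visible text, then run
# the same local search over it. Only the page download would touch
# the network; the analysis stays on the machine.
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect the visible text chunks of an HTML document."""

    def __init__(self) -> None:
        super().__init__()
        self.chunks: list[str] = []

    def handle_data(self, data: str) -> None:
        if data.strip():
            self.chunks.append(data.strip())


def page_text(html: str) -> str:
    """Flatten an HTML document into a single plain-text string."""
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)


def find_sentences(text: str, keyword: str) -> list[str]:
    """Return sentences that mention `keyword`, case-insensitively."""
    return [s.strip() for s in text.split(".") if keyword.lower() in s.lower()]
```

A real assistant would hand the matching passages to the model as context rather than returning them raw, but the privacy property is the same: the page content is analysed where it was downloaded.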

Devil in the Details

The system requirements for running Chat with RTX are on the demanding side. Currently, it is only available for PCs running Windows 11, leaving macOS and Linux out in the cold for now.

A minimum of 16GB of RAM and a roughly 35GB installation mean that you need quite a powerful system to use it. In addition, the system requires an Nvidia GeForce RTX 30- or 40-series GPU to work well. The requirements are a little on the heavy side but still within reach of many mid-range PCs and laptops.

You can download Nvidia Chat with RTX from Nvidia’s website.

Mixed Reviews

Chat with RTX was first made available for testing on February 19, 2024. Its reviews differ somewhat from the demos and messaging coming out of Nvidia. While the model is capable, early users suggest that its abilities are greatly exaggerated, and the large file size and hefty requirements are not matched by its output.

It can do many tasks, but it doesn’t do them well. That is understandable for a first attempt at a new idea, and as these things go, we should expect improvements. The hulking size and heavy system requirements are two areas Nvidia should work on while targeting performance improvements as well.

While proponents have started talking up Chat with RTX as the beginning of the end for ChatGPT and Google’s Gemini, the experiences of those who have used it suggest that we are quite some time from that being the case. LLMs and transformers require a large amount of processing power and the training to go with it.

Expecting a local install on a PC to perform as well as the data centres behind your favourite generative AI is asking for too much too soon. For enterprises, however, this does hold promise that could be realised sooner.

A company of a reasonable size could power its own LLM system for its workforce. The applications are vast, and the benefits to workflow in information-heavy industries are clear.

The advantages of customization, privacy and security would be highly valuable to enterprise users. So while the idea of an LLM on your PC isn’t quite ready to compete with the best in class in the immediate term, we could see personalised LLMs for enterprise users in the short to medium term.

The age of on-device AI

The availability of Nvidia’s Chat with RTX signals that we are moving towards the age of on-device AI. Samsung’s Galaxy S24 line-up with Galaxy AI already processed the majority of its AI tasks on-device.

Now Nvidia gives us completely on-device processing. According to some reviews, the product is not good enough yet; then again, very few first-generation products are. Nvidia has certainly set the cat among the pigeons with Chat with RTX.


Kudzai G Changunda
http://www.about.me/kgchangunda
Finance guy with a considerable interest in the adoption of web 3.0 technologies in the financial landscape. Both technology and regulation focused but, of course, people first.