Published on November 17, 2023, 6:55 am

Microsoft and Nvidia Empower Local AI With New Tools and Updates

Microsoft and Nvidia are making significant strides in artificial intelligence (AI), particularly around the rising popularity of generative AI. The technology has traditionally relied heavily on cloud servers, but both companies are now introducing tools that reduce users’ dependency on remote AI systems.

At Ignite 2023, Microsoft and Nvidia unveiled developments aimed at helping users build and run generative AI applications locally. Leveraging Windows 11’s increased focus on AI and incorporating well-known AI models from Microsoft, Meta, and OpenAI, the new software offerings are set to make AI capabilities more accessible.

One such tool is Microsoft’s Windows AI Studio, which consolidates models and development tools from catalogs such as Azure AI Studio and Hugging Face. With configuration interfaces, walkthroughs, and other instruments at developers’ disposal, Windows AI Studio aims to simplify the building and fine-tuning of small language models. Initially released as a VS Code extension in the coming weeks, the workflow will enable local AI workloads on hardware such as Neural Processing Units (NPUs).

Nvidia, for its part, is introducing a major update to TensorRT-LLM that promises expanded and accelerated AI applications on Windows 11 systems. Notably, the update lets users keep their data on local systems instead of sending it to cloud servers, addressing security concerns by giving them greater control over data privacy. The TensorRT-LLM updates will be compatible with laptops, desktops, and workstations equipped with GeForce RTX graphics cards carrying at least 8GB of VRAM.

Among the improvements is a wrapper that makes TensorRT-LLM compatible with OpenAI’s Chat API. In addition, version 0.6.0 of TensorRT-LLM promises up to five times faster AI inference and adds support for new large language models such as Mistral 7B and Nemotron-3 8B on GeForce RTX 30- and 40-series GPUs with at least 8GB of memory.
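To illustrate what Chat API compatibility means in practice, the sketch below builds an OpenAI-style Chat Completions request aimed at a local server rather than the cloud. This is a minimal illustration, not Nvidia's actual interface: the endpoint URL, port, and model name are assumptions, and a real local server exposing the wrapper would be needed to actually send the request.

```python
import json
import urllib.request

# Hypothetical local endpoint exposed by a TensorRT-LLM wrapper;
# the URL, port, and model name here are illustrative assumptions.
LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(prompt, model="local-llm"):
    """Build an OpenAI Chat Completions-style request targeting a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Because the wrapper speaks the same protocol as OpenAI's hosted API,
# existing client code mainly needs its base URL swapped so prompts and
# responses stay on the local machine.
req = build_chat_request("Summarize this document.")
```

The design point is that nothing in the request format changes; only the destination does, which is how existing OpenAI-based applications can keep data local.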

Nvidia will soon release this update on its GitHub repository and make the latest optimized AI models accessible at ngc.nvidia.com. Furthermore, those interested in the upcoming AI Workbench model customization toolkit can now join the early access list.

In other news, Microsoft has folded Bing’s AI-powered chatbot into the Copilot brand. Users opening the Bing chat window in Edge or the new Copilot assistant in Windows 11 will now see “Copilot with Bing Chat.” Bing Chat first appeared as a chatbot within Edge; its functionality was later integrated into the Copilot assistant introduced with the recent Windows 11 23H2 update. Unifying these features under one name consolidates Microsoft’s response to ChatGPT.

With these advancements from Microsoft and Nvidia, users can look forward to enhanced local AI capabilities and increased control over their data. The integration of renowned AI models and widespread accessibility across devices further fuels the progress of generative AI.
