ARTIFICIAL INTELLIGENCE | Opera Browser Integrates Major AI Models for Local Download and Offline Use

According to Opera, this is the first time local LLMs can be easily accessed and managed from a major browser through a built-in feature.

Browser company Opera says it is adding experimental support for 150 local LLM (Large Language Model) variants, drawn from approximately 50 model families, to the developer stream of its Opera One browser.

LLMs, such as the GPT (Generative Pre-trained Transformer) models developed by OpenAI, are advanced artificial intelligence systems trained on large amounts of text data to understand and generate human-like text. They are used for various natural language processing tasks, including text generation, translation, and summarization.

The local AI models are a complementary addition to Opera’s online Aria AI service. Among the supported local LLMs are:

  • Llama from Meta
  • Vicuna
  • Gemma from Google
  • Mixtral from Mistral AI


“Introducing Local LLMs in this way allows Opera to start exploring ways of building experiences and know-how within the fast-emerging local AI space,” said Krystian Kolondra, EVP Browsers and Gaming at Opera.


Using Local Large Language Models means users’ data is kept locally on their device, allowing them to use generative AI without the need to send information to a server, Opera said.

Among the issues emerging in artificial intelligence discourse is data privacy, a concern that has seen three leading decentralized AI projects, namely Fetch.ai, SingularityNET (SNET), and Ocean Protocol, decide to merge to create a decentralized AI ecosystem.

“As of today, the Opera One Developer users are getting the opportunity to select the model they want to process their input with. To test the models, they have to upgrade to the newest version of Opera Developer and follow several steps to activate the new feature,” Opera said.

“Choosing a local LLM will then download it to their machine. The local LLM, which typically requires 2-10 GB of local storage space per variant, will then be used instead of Aria, Opera’s native browser AI, until a user starts a new chat with the AI or switches Aria back on.”
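As a rough illustration of where that 2-10 GB per variant figure comes from, the on-disk size of a quantized model is approximately its parameter count times the bits stored per weight, divided by eight. The sketch below is not Opera's code; the 4-bit quantization default and the 10% file-overhead factor are assumptions chosen to match commonly distributed quantized model files.

```python
def approx_model_size_gb(num_params_billions: float,
                         bits_per_weight: int = 4,
                         overhead: float = 1.1) -> float:
    """Estimate the on-disk size of quantized LLM weights in GB.

    bits_per_weight: 4-bit quantization is a common default (assumption).
    overhead: rough 10% allowance for tokenizer/metadata files (assumption).
    """
    bytes_total = num_params_billions * 1e9 * bits_per_weight / 8 * overhead
    return bytes_total / 1e9

# A 7B-parameter model at 4 bits per weight lands around 3.9 GB,
# and a 13B model around 7.2 GB -- both within the 2-10 GB range cited.
print(round(approx_model_size_gb(7), 2))
print(round(approx_model_size_gb(13), 2))
```

Smaller variants (e.g. 2-3B parameters) and larger ones (e.g. mixture-of-experts models such as Mixtral) fall at the low and high ends of the same range under this estimate.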

