Meta has launched the first version of its standalone Meta AI application on iOS, Android, web browsers, and Ray-Ban Meta smart glasses, bringing users a more personal, voice-driven AI assistant. Built on Llama 4, the app aims to deliver a more capable and relevant AI experience with built-in contextual memory and voice capabilities.
According to Meta, the new application “knows your preferences, remembers context, and is personalized to you.” The assistant offers image generation and editing, web search, and voice-based interactions, and is also available across Meta platforms such as WhatsApp, Instagram, Facebook, and Messenger.
New features and technology
The Meta AI application includes a Discover feed where users can explore how others are using the assistant. This social element is designed to surface the best prompts, which users can remix or draw on for inspiration. Nothing is shared to the feed unless the user chooses to post it, Meta stressed.
The app also includes a voice assistant built with full-duplex speech technology, which generates spoken output directly rather than reading out text-based answers. Users can toggle this demo feature on or off to test the conversational flow. Although it lacks real-time web access, Meta says the demo voice gives users “an idea of the future.”
Voice conversations, including the full-duplex feature, are available in the United States, Canada, Australia, and New Zealand.
Cross-platform integration and personalization
The Meta AI application is designed to work as a companion for Ray-Ban Meta glasses, replacing the previous Meta View companion app. Existing Meta View users are automatically migrated to the Meta AI application, with their settings, paired devices, and media carried over in the update.
Users can start conversations on Ray-Ban Meta glasses and continue them in the Meta AI application or through meta.ai on the web. Between the app and the web interface, conversations can be picked up in either direction.
Meta AI delivers personalized assistance by drawing on information users have chosen to share across Meta products, such as profile details and the content they engage with. Users who connect their Facebook and Instagram accounts can get even more relevant answers from the assistant.
“To make Meta AI more personal, we draw on our decades of work personalizing people’s experiences on our platforms,” the company said. Personalized responses are currently available in the United States and Canada.
Web experience updates
Meta has also upgraded the web experience for Meta AI. Users can now interact with the assistant and access the Discover feed from desktop browsers. The web interface is optimized for larger screens, with enhanced image-generation features, including new options to adjust the mood and style of images.
Meta is also testing a rich document editor that lets users create documents combining text and images and export them as PDFs. In selected countries, the company is additionally testing the ability to import documents for Meta AI to analyze and summarize.
Ongoing development
Meta said this launch is “the first step toward building a personal AI” and stressed that it plans to evolve the experience over time based on user feedback. The company positions the app as an assistant for everyday use, one that can help with brainstorming, offer recommendations, and keep users connected with friends and family.
Photo: Meta