Software
Google Launches Android XR Platform on Samsung's Project Moohan
2024-12-12
Google is set to make a significant leap in the realm of augmented and virtual reality with the launch of a new Android-based XR platform. This move is aimed at accommodating the growing demand for AI features and providing a unified ecosystem for app development across different devices.

Revolutionize Your XR Experience with Google's Android XR

Android XR: The Foundation for Immersive Experiences

Google has announced the launch of Android XR, a platform that will serve as the backbone for immersive experiences. It will support app development on a wide range of devices, including headsets and glasses, opening up new possibilities for developers and users alike. The first developer preview is set to be released on Thursday, bringing with it existing tools like ARCore, Android Studio, Jetpack Compose, Unity, and OpenXR.

This platform holds the potential to transform the way we interact with digital content. It allows for seamless transitions between a fully immersive experience and the augmentation of real-world surroundings. Users will be able to control the device using Gemini and seek information about the apps and content they are engaging with.

The Samsung-Built Project Moohan Headset

Android XR will initially launch with the Samsung-built Project Moohan headset. Scheduled for availability next year, this headset is set to be a game-changer in the XR space. However, the launch has faced some delays, with reports suggesting that it was originally supposed to ship earlier this year. Despite the setback, Google remains confident in its potential.

The headset's ability to easily switch between immersive and augmented modes is a key feature. It offers users a unique and flexible experience, allowing them to seamlessly integrate digital elements into their real-world environments.

App Ecosystem and Gemini

Since Android XR is based on Android, most mobile and tablet apps on the Play Store will be compatible with it. This means that users purchasing an Android headset will have access to a vast library of apps through the Android XR Play Store. This is a significant advantage over other XR platforms that may have limited app availability.

Google's strategy with Android XR is also to counter Apple's Vision Pro. While Vision Pro had a limited number of apps at launch and its high cost has deterred many users, Android XR aims to provide a more accessible and app-rich experience.

Google is redesigning platforms like YouTube, Google TV, Chrome, Maps, and Google Photos to enhance the immersive screen experience, and it has taken steps to ensure better app access for users.

The company is adding an Android XR Emulator to Android Studio, enabling developers to visualize their apps in a virtual environment. The emulator features XR controls for using a keyboard and a mouse to emulate navigation in a spatial environment, making development more intuitive.

Furthermore, Google is pushing Gemini for Android XR, providing additional screen control and contextual information. It will also support the Circle to Search feature, enhancing the overall user experience.
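Because Android XR runs standard Android apps, an existing mobile app generally needs no XR-specific code to appear as a panel on the headset or in the Android XR Emulator. The sketch below is a minimal Jetpack Compose activity built only on standard Android APIs; it is an illustrative example of the kind of unmodified app the compatibility story is about, not code from Google's Android XR documentation.

```kotlin
import android.os.Bundle
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.padding
import androidx.compose.material3.Button
import androidx.compose.material3.Text
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableIntStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.unit.dp

// An ordinary single-Activity Compose app. Because Android XR is based on
// Android, an app like this should install from the Android XR Play Store
// and render as a 2D panel in the headset without code changes.
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            var taps by remember { mutableIntStateOf(0) }
            Column(modifier = Modifier.padding(24.dp)) {
                Text("Hello from a plain Android app")
                Button(onClick = { taps++ }) { Text("Tapped $taps times") }
            }
        }
    }
}
```

Developers who want spatial panels or 3D content would then layer the XR-specific Jetpack libraries from the developer preview on top, but the baseline 2D app stays the same.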

Support for Other Devices

Google hopes to expand the reach of Android XR to glasses with "all-day help" in the future. Prototype glasses are already being seeded to some users, although a specific consumer launch date has not been specified.

Demos have shown the potential of Android XR on other devices. For example, a person can ask Gemini to summarize a group chat and get recommendations for buying a card. Another demo showed glasses being used to ask for ways to hang shelves.

Companies like Lynx, Sony, and XReal, which utilize Qualcomm's XR solutions, will be able to launch more devices with Android XR. Google will also continue to work with Magic Leap on XR, although it remains unclear whether Magic Leap will adopt Android XR.

Google's past attempts at AR and VR with Project Tango, Daydream, and Cardboard VR have laid the groundwork for its current Android XR effort. The company is now focused on building a sustainable ecosystem that will attract both hardware makers and software developers.
Anthropic's 3.5 Haiku Model Now Available in Claude
2024-12-12
Anthropic, a prominent name in the AI realm, has recently made a significant move by releasing one of its latest AI models, Claude 3.5 Haiku. This development has sparked considerable interest among users of its AI chatbot platform, Claude. The news of 3.5 Haiku's launch began circulating on social media on Thursday morning, and TechCrunch was able to verify independently that the model is now accessible via the web and mobile applications.

Unlock the Potential of Claude 3.5 Haiku: A Game-Changer in AI

Availability and Accessibility

Claude 3.5 Haiku is now available on both the web and mobile apps. This widespread accessibility allows users to engage with the model's capabilities from their preferred devices and marks a significant step in making cutting-edge AI technology available to a wider audience.

The fact that it is available on multiple platforms ensures that users can seamlessly integrate it into their daily workflows, whether they are working on a desktop computer or on the go using a mobile device. This flexibility enhances the user experience and opens up new possibilities for leveraging AI in various contexts.

Performance and Specializations

3.5 Haiku, which was unveiled by Anthropic in November, has demonstrated remarkable performance. It matches or even surpasses the performance of Anthropic's outgoing flagship model, 3 Opus, on specific benchmarks. This indicates that the new model brings enhanced capabilities and efficiency to the table.

Anthropic emphasizes that 3.5 Haiku is particularly well-suited for tasks such as coding recommendations, data extraction and labeling, and content moderation. These are crucial areas where the model's capabilities can have a significant impact, helping users streamline their processes and achieve better results.
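As a concrete illustration of the sort of task Anthropic highlights, here is a minimal sketch that sends a labeling request to the publicly documented Anthropic Messages API using only the JDK's built-in HTTP client. The endpoint, headers, and the model identifier claude-3-5-haiku-20241022 follow Anthropic's public API documentation, but treat them as assumptions to verify rather than guaranteed values; the prompt and overall structure are invented for the example.

```kotlin
import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // Sketch of a data-labeling request to the Anthropic Messages API.
    // Model ID and header values follow Anthropic's public docs at the time
    // of writing; confirm them before relying on this.
    val apiKey = System.getenv("ANTHROPIC_API_KEY")
        ?: error("Set ANTHROPIC_API_KEY first")

    val body = """
        {
          "model": "claude-3-5-haiku-20241022",
          "max_tokens": 1024,
          "messages": [
            {
              "role": "user",
              "content": "Label the sentiment of this review as positive, negative, or mixed: 'The headset is comfortable but the battery barely lasts two hours.'"
            }
          ]
        }
    """.trimIndent()

    val request = HttpRequest.newBuilder()
        .uri(URI.create("https://api.anthropic.com/v1/messages"))
        .header("x-api-key", apiKey)
        .header("anthropic-version", "2023-06-01")
        .header("content-type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())

    // The response is JSON; the generated label sits in the "content" array.
    println(response.body())
}
```

The max_tokens field caps how much text comes back per call, which is where the longer output limit discussed below comes into play. Anthropic's official SDKs wrap this same endpoint; the raw HTTP call is used here only to keep the sketch dependency-free.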

Text Output and Knowledge Cutoff

One of the notable features of 3.5 Haiku is its ability to output longer chunks of text compared to its predecessor, 3 Haiku. This allows for more detailed and comprehensive responses, providing users with a deeper understanding of the topics at hand.

The model also has an updated knowledge cutoff, meaning it can reference more recent events. This matters in a rapidly evolving field where up-to-date information is crucial: 3.5 Haiku can draw on newer information than its predecessor could.

Image Analysis Limitation

However, it's important to note that this model does not support image analysis. While it excels in other areas such as text-based tasks, it falls short in one key aspect compared to Anthropic's other available models, 3 Haiku and 3.5 Sonnet. This limitation should be considered when choosing the appropriate model for a particular task.

Despite this limitation, 3.5 Haiku still offers a wide range of capabilities that make it a valuable addition to Anthropic's AI portfolio. Users can leverage its strengths in text-related tasks while being aware of its limitations in image analysis.

Controversy and API Cost

3.5 Haiku became the subject of minor controversy when it was introduced into Anthropic's API early last month. Initially, Anthropic suggested that the cost of using 3.5 Haiku would be the same as 3 Haiku. However, they later changed their stance, arguing that the model's increased "intelligence" warranted a higher API cost.

This controversy highlights the importance of clear communication and transparency regarding pricing and model capabilities. It also shows the complexity of balancing the value and cost of AI models in the market. Users need to carefully consider these factors when deciding whether to adopt 3.5 Haiku or other available models.

Google's Project Astra AR Glasses: A Future Vision, Not Today's Reality
2024-12-12
Google is on a journey to bring augmented reality and multimodal AI capabilities to glasses. However, the details of its plans remain somewhat hazy. Multiple demos of Project Astra have been seen, and now the company is set to release prototype glasses for real-world testing. This marks an important step in the evolution of computing.

Unlock the Future with Google's AR and AI Glasses

Project Astra: The Foundation of Google's Glasses

DeepMind's Project Astra is at the heart of Google's vision for glasses. It aims to build real-time, multimodal apps and agents with AI. Multiple demos have shown its potential running on prototype glasses armed with AI and AR capabilities, ready to transform the way we interact with technology.

Google's decision to release these prototype glasses to a small set of selected users is a significant move. It shows the company's commitment to pushing the boundaries of what's possible in vision-based computing. Google is now allowing hardware makers and developers to build various glasses, headsets, and experiences around Android XR, its new operating system.

The Coolness and Vaporware Aspect

The prototype glasses seem incredibly cool, but it's important to note that they are essentially vaporware at this stage. Google has yet to share concrete details about the actual product or its release date. Despite this, the company clearly has ambitions to launch these glasses at some point, referring to them as the "next generation of computing" in a press release.

Today, Google is focused on building out Project Astra and Android XR to make these glasses a reality. It has shared new demos showcasing how the prototype glasses can use AR technology to perform useful tasks like translating posters, remembering things around the house, and reading texts without reaching for the phone.

Google's Vision for AR and AI Glasses

Android XR will support glasses for all-day help in the future. Google envisions a world of stylish, comfortable glasses that users will love to wear every day and that seamlessly integrate with other Android devices. These glasses will put the power of Gemini at users' fingertips, providing helpful information right when it's needed, such as directions, translations, or message summaries. It's all within reach, either in the line of sight or directly in the ear.

Many tech companies have shared similar visions for AR glasses, but Google seems to have an edge with Project Astra. It is launching the app to beta testers soon, giving a glimpse of what's to come. I had the opportunity to try out the multimodal AI agent as a phone app this week and was impressed by its capabilities.

Walking around a library on Google's campus and using the agent by pointing a phone camera at objects and talking to it, I witnessed real-time processing of voice and video. The agent could answer questions about what I was seeing and provide summaries of authors and books.

Project Astra works by streaming pictures of the surroundings and processing voice simultaneously. Google DeepMind says that no user data is used for training the models, but the AI remembers the surroundings and conversations for 10 minutes, allowing it to refer back to previous information.

Team members also demonstrated how Astra can read phone screens, much as it understands what comes through a camera. It can summarize Airbnb listings, show nearby destinations using Google Maps, and execute Google Searches based on what's on the phone screen.

Using Project Astra on a phone is impressive, and it indicates the potential of AI apps. OpenAI has also demoed similar vision capabilities for GPT-4o, which are set to release soon. Apps like these have the potential to make AI assistants more useful by extending beyond text chatting.

It's clear that this kind of AI model would be ideal on a pair of glasses, and Google seems to share that vision. But it may take some time to make it a reality.
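The ten-minute recall described above is easy to picture as a time-bounded rolling buffer of recent observations. The sketch below is purely hypothetical and is not Google's implementation: it shows one simple way an agent could keep roughly ten minutes of frame descriptions and transcribed speech around for follow-up questions, with names like RollingMemory and Observation invented for illustration.

```kotlin
import java.time.Duration
import java.time.Instant
import java.util.ArrayDeque

// Hypothetical illustration only: a rolling "session memory" that keeps
// observations (frame captions, transcribed speech, etc.) for a fixed
// window, mimicking the ~10-minute recall described for Project Astra.
data class Observation(val at: Instant, val description: String)

class RollingMemory(private val window: Duration = Duration.ofMinutes(10)) {
    private val buffer = ArrayDeque<Observation>()

    fun remember(description: String, now: Instant = Instant.now()) {
        buffer.addLast(Observation(now, description))
        evict(now)
    }

    // Drop anything older than the window so only recent context remains.
    private fun evict(now: Instant) {
        while (buffer.isNotEmpty() &&
            Duration.between(buffer.first().at, now) > window
        ) {
            buffer.removeFirst()
        }
    }

    // Recent context that could be prepended to the next model request.
    fun recentContext(now: Instant = Instant.now()): List<String> {
        evict(now)
        return buffer.map { it.description }
    }
}

fun main() {
    val memory = RollingMemory()
    memory.remember("User pointed the camera at a shelf of art books")
    memory.remember("User asked where they left their glasses")
    println(memory.recentContext())
}
```

A real agent would presumably hold richer state (embeddings, transcripts, scene understanding) and stream it alongside the live camera and audio, but the windowing idea is the same.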