Google’s announcement of Gemini 2.0 also introduced several AI agents currently in development. The one most of us are likely to interact with is Project Astra.
It's designed to be a versatile, helpful AI assistant for everyday tasks, capable of understanding not only the task at hand but also the wider context around it.
In this article we’ll cover what Google's Project Astra is, how to use it, and whether it's safe.
What is Project Astra?
Project Astra is a research project from Google. It’s focused on developing AI assistants that can process multimodal information, understand the context of their surroundings, and then respond naturally in conversation.
Its goal is to be a superpowered AI agent that assists with a range of everyday tasks and can do more than current AI assistants.
Project Astra is designed to be ‘multimodal’, meaning it can understand and respond to different inputs including text, speech, images, and video. This allows for a more comprehensive understanding of the user's needs and context.
It also considers wider context based on activity, past interactions and more, enabling a more personalized and relevant response.
Building on the current growth of AI chatbots, Project Astra hopes to engage with users in even more intuitive and human-like conversations.
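While Astra itself isn't publicly available, the multimodal idea can be illustrated with Google's existing Gemini API. The short sketch below, using the google-generativeai Python library, sends an image and a text question in a single request; the model name and image path here are placeholder assumptions, not details from Project Astra.

```python
# Minimal sketch of a multimodal request using Google's public Gemini API
# (google-generativeai). Astra itself is not publicly available; this only
# illustrates combining an image and text in one prompt.
# The model name and image path are placeholder assumptions.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # assumes you already have an API key

model = genai.GenerativeModel("gemini-1.5-flash")
photo = Image.open("kitchen_counter.jpg")

# One request mixing two modalities: an image and a text question about it.
response = model.generate_content(
    [photo, "What ingredients are on this counter, and what could I cook with them?"]
)
print(response.text)
```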
How to use Project Astra?
Astra is still in its research phase, which means it isn’t publicly available yet while the Google DeepMind team continues to refine the project. However, if you can’t wait to try it, you can apply to become one of Google’s ‘Trusted Testers’:
1. Visit the Project Astra website and click ‘Join the trusted tester waitlist.’
2. Fill out the Google form with your details.
3. Ensure that you read the Google Terms of Service and Privacy Policy.
4. Click submit.
5. Await Google’s response.
Is Project Astra safe?
Allowing an AI to become ever more involved in our day-to-day lives can be an intimidating prospect.
As this new technology gains access to more and more information about its users, there are understandable concerns about security, privacy, and the potential for unintended consequences.
To mitigate these risks, Google must take significant steps to secure users’ data and be transparent about the safety protocols in place.
Read: What is Project Mariner? Google's AI Agent Can Control Your Computer
The tech giant says it recognizes the responsibility these new technologies entail and aims ‘to prioritize safety and security in all our efforts.’
Though it doesn’t confirm the specifics, Google says that to ensure safe and responsible development it is taking an ‘exploratory and gradual approach’. This means conducting research on multiple prototypes and iteratively implementing safety training. Before any public release, the company is also working with trusted testers and external experts and carrying out extensive risk assessments.