
Google's new Personal Intelligence feature is designed to make artificial intelligence more personal, more helpful, and more connected to a user’s daily digital life.
The announcement was made on January 14, 2026. The feature links Gemini with Google apps such as Gmail, Google Photos, YouTube, and Search.
Google describes the thinking behind it this way: “the best assistants don’t just know the world; they know you and help you navigate it.” The feature is currently rolling out in beta in the United States.
It is part of Google’s move to shift AI from a general assistant to a more personalised digital companion.
Early users are already seeing how it works in real situations. Josh Woodward, Vice President of Google Labs, Gemini and AI Studio, shared an example from his own experience.
He said, “I asked Gemini… it suggested different options: one for daily driving and another for all-weather conditions.”
In the same request, the system went further: it pulled family travel memories from Google Photos and retrieved a license plate number from an image.
He added, “Just like that, we were set.” Users are also applying it in planning and daily decisions.
Woodward explained, “It’s been exceptional for planning our upcoming spring break.”
The system looks at past trips, interests, and habits.
It then suggests travel ideas that feel more relevant and personal. It can also recommend activities based on previous experiences, making planning smoother and more tailored.
The feature works in a simple way. Users connect their Google apps with a single tap. Once connected, Gemini can access information across those apps.
It reads emails in Gmail, analyses photos in Google Photos, and uses YouTube and Search for context.
Woodward said, “You can now personalise Gemini by connecting Google apps with a single tap.” After this, the assistant combines information from different sources to respond more intelligently.
Google has stressed that privacy is a key focus. “Connecting your apps is off by default. You choose to turn it on,” the company stated.
Users control which apps are connected and can disconnect them at any time. Google also clarified that personal data is not directly used to train AI models.
It is only used to respond to user requests, keeping control with the user. However, the company admits the system is still improving. “We’ve tested this beta version extensively, but we haven’t eliminated mistakes,” Google noted.
Sometimes the AI may make wrong assumptions or over-personalise responses. Users are encouraged to correct it.
Woodward added, “If Gemini gets this wrong, you can just tell it.” The feature is currently available to Google AI Pro and AI Ultra subscribers in the U.S.
It can be enabled in settings under Personal Intelligence by selecting connected apps. Google says it works across the web, Android, and iOS, with a wider rollout expected in the future.