
Everything Announced at Google’s I/O 2023 Keynote

Predictably, AI was the star of the show.

Google IO icon on a sign
Justin Duino / Review Geek

As expected, Google used this year’s I/O developer keynote to cement its position as a leader in artificial intelligence. The majority of Wednesday’s I/O 2023 keynote was spent discussing AI-enabled features, new AI models, and the responsible deployment of AI applications.

But we also got some actual hardware, thankfully. Google launched the Pixel 7a and opened pre-orders for both its Pixel Fold and Pixel Tablet. It also gave us a glimpse of some features included in the upcoming Android 14 update. Note that you can watch the full conference on YouTube.

An All-New Foldable: Google Pixel Fold

Back of the Google Pixel Fold and its rear cameras
Justin Duino / Review Geek

After several years of leaks and rumors, the Pixel Fold was finally revealed during Google’s I/O 2023 conference. It features a large 5.8-inch cover display, plus a 7.6-inch foldable display. The camera array is quite respectable, and the phone runs on Google’s Tensor G2 chipset.

Google’s “Fluid Friction” hinge design allows the phone to fold completely flat, and the company claims it’s the most durable hinge of any foldable. The Pixel Fold is also the thinnest foldable on the market, and the thinnest phone Google has ever made (when unfolded, of course).

Early rumors suggested that the Pixel Fold would not have a visible crease on its foldable screen. Unsurprisingly, live demos from I/O 2023 show that this is untrue. The crease is visible.

As for actual durability, we’re looking at an IPX8 water-resistance rating. That means Google hasn’t certified any dust resistance for this device, which is a shame, as foldable phones are notoriously vulnerable to sand, dirt, and other small particles.

Quite naturally, Google has made several improvements to Android specifically for the foldable form factor. Dual-screen activities allow for advanced multitasking—you can lock two apps into dual-screen mode so they always open correctly, drag images or text between dual-screen apps, and so on. Essentially, Google copied Samsung’s homework and built it right into the Android OS.

Pre-orders for the Pixel Fold start today for $1,799. Orders begin shipping in June, which comes as a bit of a surprise, as we expected a fall release. Anyway, when you pre-order, you get a free Pixel Watch.

Google Pixel Fold (256 GB)

Pre-order the new Google Pixel Fold today for $1,799 and get a free Pixel Watch. The fancy new device starts shipping at the end of June.

The Budget Monster: Google Pixel 7a

The Google Pixel 7a sitting in some grass.
Andrew Heinzman / Review Geek

Last year’s Google Pixel 6a was fairly different from Google’s mainline Pixel 6. That’s not the case this year—Google upgraded the Pixel 7a with a 90Hz display, wireless charging, and an all-new camera array. It also costs more, clocking in at $500 instead of the previous $450.

In other words, the Pixel 7a is extremely similar to the Pixel 7. We believe that customers will have a hard time choosing between these phones, as the difference in price is just $100 (and the Pixel 7 regularly goes on sale for about $550).

We’ve already written a full, detailed review of the Pixel 7a. And our feelings about the phone are a bit complicated. Check out our review for more information, or read our less-opinionated spec evaluation to learn everything you need to know about the Pixel 7a.

The Pixel 7a is now available for $500. It ships with a pair of Pixel Buds A-series if you order it early enough.

Google Pixel 7a

The Google Pixel 7a offers wireless charging, a 90Hz display, and flagship-level cameras for $100 less than the standard Pixel 7.

A Portable Nest Hub: Google Pixel Tablet

Pixel Tablet on a table showing its home screen
Justin Duino / Review Geek

We finally got the full details behind Google’s Pixel Tablet, which was first teased during last year’s I/O conference. It features an 11-inch display and is one of the few notable Android tablets in recent memory.

The whole idea behind this tablet is that it can replace the Nest Hub. Basically, you stick the Pixel Tablet on a docking station (which doubles as a speaker), and it becomes a smart home hub. It offers the same photo slideshow home screen that you get on the Nest Hub, and you can use the fingerprint sensor to bring up traditional Android apps while the tablet is docked. (Note that Amazon began offering these features on some of its Fire HD tablets a few years ago.)

Of course, Pixel Tablet will leverage the redesigned Google Home app, which makes it easier to access smart home controls and see live feeds from compatible smart cameras. It should, in theory, make for a more robust smart home controller than the Nest Hub.

Google also notes that the Pixel Tablet has Chromecast built-in, so you can cast video to it from your phone (similar to what you might do with a Nest Hub). And the Google TV app is getting an update specifically for improved tablet compatibility.

The Pixel Tablet costs $500 and is available for pre-order today. It starts shipping next month, and it comes bundled with the charging speaker dock. Interestingly, Google is also selling a protective case with an integrated kickstand, and you don’t need to remove the case to place the tablet on the dock.


Google Pixel Tablet

Pre-order the Pixel Tablet today for a new AI-powered smart home controller and media player.

A More Customizable, Personalized Android Experience

Sameer Samat on stage at Google IO 2023 discussing Android
Justin Duino / Review Geek

It seems that Android 14 will include some AI-enabled features. The headliner is generative AI in the Google Messages app—Bard can rewrite any of your messages in a variety of styles, including professional tones, emotion-based styles, and sillier options like Shakespearean English.

Android users can also Create a Wallpaper using natural language prompts. Basically, you open the wallpaper picker, select “Create a Wallpaper,” and type out what you want. Android will let you swipe through several images and refine your search using style prompts. (Google says Create a Wallpaper is coming this fall, so yeah, Android 14.)

Additionally, Google showed off Emoji Wallpapers, which let you choose from a selection of emoji in several patterns and colors. And a new Cinematic Wallpaper effect turns your photos into “3D images” where the subject floats above the background. These two features don’t appear to be AI-driven.

I should note that Microsoft’s SwiftKey keyboard beta currently offers Bing AI integration, so Google isn’t the first one to have this idea. And most of Google’s wallpaper ideas are plainly inspired by the latest iOS updates.

Find My Device

Sameer Samat on stage at Google IO 2023 discussing Find My Device on Android
Justin Duino / Review Geek

Speaking of iOS, Google is building its own Find My Device network. This is something we heard about through a leak, and unfortunately, Google’s keynote didn’t give us a ton of new information.

Find My Device integrates with several tracking services, including Tile, and it works a lot like Apple’s Find My network. If another participant in the network passes near a tracker you’ve lost, the network will pinpoint that tracker’s location on a map and share the details with you.

Google says that Find My Device is privacy-focused. Location information is encrypted, so even Google can’t see where your trackers are located. And if someone tries to stalk you using a tracker, you will receive an Unknown Tracker Alert.

Notably, Google is working with Apple to ensure that Unknown Tracker Alerts work on both Android and iOS. The company says that it’ll launch Find My Device later this summer.

The Bard AI and PaLM 2

Sundar Pichai on stage at Google IO 2023 announcing PaLM 2 Models Gecko, Otter, Bison, and Unicorn
Justin Duino / Review Geek

The waitlist for Google’s Bard AI has dropped, though advanced integrations with Google Search and other services are still a few months out. In any case, Google shared a lot more information about this AI, including many of its features.

Bard runs on Google’s PaLM 2 AI language model, which is meant to rival ChatGPT in its functionality, versatility, and global usability (it supports over 100 languages). You can have conversations with the Bard AI, or use it to generate text for productivity purposes. It can also identify images, interestingly, and Google is working with Adobe to integrate its Firefly image generator with Bard.

Most of Google’s demonstrations of Bard focus on how it will integrate with other tools, such as Google Search. Users will see a massive Bard AI panel added to the top of Search results. This panel will include a natural language response to your queries, plus any other relevant information, such as websites you may want to visit, products you may want to buy, and sources for where information was gathered.

Bard will also feed into Google Search’s existing image-identification feature. If you want additional context for an image (like a picture of the Pope wearing a puffer jacket), you can press a three-dot icon and select “About This Image” to see what it is and where it’s been shared.


And, in one notable example, Google showed how Bard will work with Maps. Let’s say that you ask Bard for good colleges in your area of interest—it will spit out some results, which you can ask to see highlighted in Google Maps. I imagine that this feature will be especially useful to tourists.

Google Search integration for Bard will come later this year. But you can test the feature early by joining Google’s Labs waitlist. Just press the Labs icon (a flask with blue liquid) in the Google Search mobile app or in Chrome on desktop.

For any developers reading this article, Google also noted that PaLM 2 is available in four model sizes (Gecko, Otter, Bison, and Unicorn), some intended for on-device operation and others meant for servers. The company will also offer pre-trained foundation models through its Vertex AI platform for developers and corporations.

There are currently three Vertex AI models—an image generator called Imagen, a coding assistant called Codey, and a speech model called Chirp.

Developers can fine-tune these models for their purposes. And for intensive or niche applications, Vertex AI models can be trained on domain-specific data. Corporations can take the Imagen model and customize it for their advertising team, for example, or deploy Codey for the IT department.

Finally, Google will offer A3 Virtual Machines (based on NVIDIA’s H100 GPUs) for corporations that need to run personalized or custom AI processes at scale.

New AI Capabilities for Existing Apps

Google Photos

One of the things that Google keeps emphasizing is that the Bard AI will give you a “head start” or a “starting point” for professional, creative, or personal work. And this is most evident in Bard’s integration with existing apps.

“Help Me Write,” a feature Google has already been testing in Docs and Gmail, is a good example of this philosophy. You enter a short prompt and get a useful, organized output: Bard can create a basic draft, suggest ways to improve your writing, or finish something you’re stuck on, like an email or a story. Of course, Bard can also spit out a complete piece of work, but Google is rightfully positioning this as a tool for work rather than a replacement for it.

Bard will also find its way to Slides, Sheets, and other Google-owned apps. And the idea is pretty simple. If you need speaker notes for your slideshow, or require unique AI-generated images, Bard will get the job done for you. More interestingly, Bard can create spreadsheets using external data or the data provided by you. And you can see the sources for any outside info gathered by Bard.

If productivity isn’t your thing (it sure ain’t mine), Google Photos now offers a Magic Editor tool. It allows you to remove subjects and objects from photos (like the existing Magic Eraser), but it also lets you reposition a subject in an image—the AI will fill in the background, and it can even move the shadows cast by your subject.

Magic Editor also offers some traditional editing features, such as color correction. Subject masking is also an option—you can select a portion of your image and edit how it looks. A notable example is sky replacement, which is exactly what it sounds like.

Google Maps is also gaining an AI-powered Immersive View for Routes. Essentially, you can see a 3D model of routes before you take them. This may be useful to cyclists, tourists, or anyone who just likes to wander around.

Project Tailwind

Google demonstrating Project Tailwind.
Google, Review Geek

The educational system is in for a treat (or a curse, depending on how things pan out). Google showed off Project Tailwind during I/O 2023, and it seems like a very interesting tool for students and teachers.

As described by Google, Project Tailwind is an “AI-powered notebook that helps you learn faster.” It’s really a PaLM 2 model that you refine using your own data. A teacher can dump their learning materials into the AI and ask it to pump out a syllabus, a glossary of important terms, or other items that would usually take a few hours to put together.

Students can interact with Tailwind to ask questions, look up information, or create an AI notebook based on their own notes. And, interestingly, Tailwind cites its sources. It can also show when a piece of information is verified by the teacher’s learning materials, which is quite useful.

Google says that Tailwind will be useful outside of schools, too. It could help writers keep up with their research, or surface useful information for a lawyer’s current case.

Tools to Evaluate Online Information

Google demonstrating how it will reduce image-based misinformation.
Google, Review Geek

During the I/O 2023 keynote, Google kept mentioning the word “responsibility.” Obviously, people are very concerned about how AI will be used for malicious purposes, as we already have plenty of misinformation floating around the internet.

So, Google will provide tools to help people identify AI-generated images. And these tools are pretty rudimentary—all images generated through Google services will contain metadata that indicates their source. And Google Search will note when an image contains AI metadata, though this metadata needs to be voluntarily applied by the image uploader or the tool used for generation.
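Google didn’t detail its metadata format during the keynote, but the general mechanism is easy to sketch. The Python snippet below is a stdlib-only illustration of the idea: it embeds a provenance note in a PNG’s tEXt chunk and reads it back. The “Source” keyword and its payload are hypothetical examples, not Google’s actual scheme.

```python
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC-32 of type + data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def make_png_with_metadata(keyword: str, text: str) -> bytes:
    """Create a minimal 1x1 grayscale PNG carrying a tEXt provenance chunk."""
    # IHDR: width=1, height=1, 8-bit depth, grayscale, default flags
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0)
    # IDAT: one filter byte plus one black pixel, zlib-compressed
    idat = zlib.compress(b"\x00\x00")
    # tEXt: keyword, null separator, value (both latin-1 per the PNG spec)
    text_chunk = keyword.encode("latin-1") + b"\x00" + text.encode("latin-1")
    return (b"\x89PNG\r\n\x1a\n"
            + png_chunk(b"IHDR", ihdr)
            + png_chunk(b"tEXt", text_chunk)
            + png_chunk(b"IDAT", idat)
            + png_chunk(b"IEND", b""))

def read_text_chunks(png: bytes) -> dict:
    """Walk the chunk list and collect every tEXt keyword/value pair."""
    chunks = {}
    pos = 8  # skip the 8-byte PNG signature
    while pos < len(png):
        (length,) = struct.unpack(">I", png[pos:pos + 4])
        ctype = png[pos + 4:pos + 8]
        data = png[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = data.partition(b"\x00")
            chunks[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # length + type + data + CRC
    return chunks

png = make_png_with_metadata("Source", "generated-by: example-ai-tool")
print(read_text_chunks(png))  # {'Source': 'generated-by: example-ai-tool'}
```

As the article notes, this is exactly why the approach is rudimentary: the tag rides along inside the file, so stripping or rewriting it is trivial for anyone re-saving the image.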

Users can also press the three-dot menu to bring up an “About This Image” panel, which provides context for an image and shows where it’s been shared. And Lens will gain this exact feature at some point.

Again, these are rudimentary steps to fight misinformation in the AI era. Removing metadata from an image isn’t a particularly difficult task. But hey, at least Google is acknowledging the problem.

Andrew Heinzman
Andrew is the News Editor for Review Geek, where he covers breaking stories and manages the news team. He joined Life Savvy Media as a freelance writer in 2018 and has experience in a number of topics, including mobile hardware, audio, and IoT.