

With AI, Google Wants to Do All 'the Googling for You.' Is That a Good Thing?

Get up to speed on the rapidly evolving world of AI with our roundup of the week's developments.

Connie Guglielmo SVP, AI Edit Strategy
Connie Guglielmo is a senior vice president focused on AI edit strategy for CNET, a Red Ventures company. Previously, she was editor in chief of CNET, overseeing an award-winning team of reporters, editors and photojournalists producing original content about what's new, different and worth your attention. A veteran business-tech journalist, she's worked at MacWeek, Wired, Upside, Interactive Week, Bloomberg News and Forbes covering Apple and the big tech companies. She covets her original nail from the HP garage, a Mac the Knife mug from MacWEEK, her pre-Version 1.0 iPod, a desk chair from Next Computer and a tie-dyed BMUG T-shirt. She believes facts matter.
Expertise I've been fortunate to work my entire career in Silicon Valley, from the early days of the Mac to the boom/bust dot-com era to the current age of the internet, and interviewed notable executives including Steve Jobs. Credentials
  • Member of the board, UCLA Daily Bruin Alumni Network; advisory board, Center for Ethical Leadership in the Media

AI dwarfed everything at I/O. (That's Sundar Pichai on the right.)

Glenn Chapman/AFP via Getty Images

At the end of a nearly two-hour keynote presentation at the Google I/O conference last week, CEO Sundar Pichai got the crowd laughing after using the company's AI to scan the transcript and tally how many times "AI" was mentioned. The answer: nearly 125 times.

Which is why the annual developers' gathering should have been called Google A/I.
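For the curious, the kind of tally Pichai ran is easy to sketch. Here's a minimal, hypothetical Python illustration of counting standalone mentions of a term in a transcript (the sample text below is invented for the example, not from the actual keynote):

```python
import re

def count_mentions(transcript: str, term: str = "AI") -> int:
    # \b word boundaries keep "AI" from matching inside longer words like "SAID"
    return len(re.findall(rf"\b{re.escape(term)}\b", transcript))

sample = "AI is everywhere. Our AI agents use AI to do the AI-ing for you."
print(count_mentions(sample))  # prints 4
```

In practice Google's tally was done by its own AI over a two-hour transcript, but the underlying task is the same kind of pattern count.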

Pichai and his team detailed how Google's generative AI engine — Gemini — is being injected into popular products and services, from Gmail, Google apps and the Android operating system to Google's market-dominant Search service. Gemini is also the foundation for new video, audio and "AI Agents." (CNET has recaps of all the product news.)


"Google is fully in our Gemini era," Pichai said at the start of the May 14 event. "We see this as how we'll make the most progress against our mission: Organizing the world's information across every input, making it accessible via any output, and combining the world's information, with the information in your [emphasis his] world, in a way that's truly useful for you." 

So, the TL;DR: Google, one of the most powerful and influential companies on the planet, is working to make sure gen AI will be adopted, to some degree, by the billions of people who use its products every day. It's an obvious strategic move, given that Google is seen as lagging behind OpenAI, maker of ChatGPT, in the gen AI race.

But is Google's approach to AI a good thing for humanity? Now's the moment we should be asking such questions of Google and the other AI makers, including OpenAI, Microsoft, Meta, Anthropic and soon Apple, which market themselves as innovators working on new ways to empower humans. 

When it comes to Google, users may love, as Pichai demoed, being able to "Ask Photos" to comb through their Google Photos libraries and return all the images that chart their kid's swimming progress over time. Or that Ask Photos will be able to tell you your car's license plate number, rather than making you do a keyword search and sift through every photo with a plate in it. "It knows [emphasis mine] the cars that appear often, it triangulates which one is yours, and tells you the license plate number," Pichai explained.

People may like having Gemini read through their Gmail inbox, pull out every mention of an upcoming event and summarize the contents of related PDFs. If the event is held over Google Meet, Gemini can recap the highlights. If there's a volunteer sign-up, Pichai added, Gemini can check your calendar, let you know if you're free and draft an email RSVP for you.

Hopefully a polite one.

Many people may also embrace AI Agents that do their "organizing, reasoning and synthesizing" — like helping you return shoes you bought online by searching your inbox for the receipt, locating the order number from your email, filling out a return form, and scheduling a UPS pickup. Or finding local service providers, from restaurants to dog walkers, when you move to a new city, and updating your address across all the sites with your personal information.

Other folks might be fascinated by Project Astra, a Gemini-based "multimodal" agent that's a step on the path to Google DeepMind's goal of creating an artificial general intelligence, or AGI, an AI that behaves more like a human. Astra can take in and produce text, images, audio and video (hence "multimodal") and can "see" the world around you through your smartphone camera in real time. In one demo, a person asked Astra to remind them where they left their glasses.

"An Agent like this has to understand and respond to our complex and dynamic world like we do," said Demis Hassabis, co-founder and CEO of Google DeepMind. "It would need to take in and remember what it sees so it can understand context and take action. And it would have to be proactive, teachable and personal, so you can talk to it naturally, without lag or delay."

An Agent like that would also be needed to power AI devices, like smart glasses, CNET's Scott Stein noted. 

Businesses are also looking to Google and other AI makers for tools that can boost productivity and profit. Actor and director Donald Glover and his creative studio Gilga asked Google how AI might help with visual storytelling, and then they tested a text-to-video tool Google introduced called Veo. The Gilga team, which used Veo to make a short film, said it allowed them to "visualize things on a time scale that's 10 or a hundred times faster." 


That means filmmakers could quickly iterate new ideas. "That's what's really cool about it," Glover said in Gilga's 90-second testimonial on YouTube. "It's like you can make a mistake faster. That's all you really want at the end of the day — at least in art — it's just to make mistakes faster."

But not everyone may be as enamored with AI, especially if you consider how much of your personal and work life (including your physical spaces) you'll need to expose to Google — even if the company says, "We take your privacy seriously."  

Still others might be concerned about turning over their "organizing, reasoning and synthesizing" to Google, because how do you know that what the company is prioritizing, summarizing, highlighting and suggesting is accurate or truly encapsulates the nuance of what's going on around you? 

That's part of the concern with AI Overviews, a new search feature that creates AI-generated summaries from sources that Google deems authoritative and trustworthy. Those overviews are presented at the top of search results, potentially training people never to actually click on the links to find out whether that summary is truly authoritative, trustworthy or even accurate. (As a Gen Zer, CNET's Katelyn Chedraoui says she prefers TikTok's search over AI Overviews.)

AI Overviews also have publishers concerned that the loss of clicks on links will undermine the search traffic they rely on to fund the creation of all the content that Google wants to summarize for you, as CNET's Imad Khan, The New York Times and others have noted.

As for me, I'd like transparency from Google on the decision-making that goes into the algorithm prioritizing that AI-organized search results page. Search executive Liz Reid said this is all about letting "Google ... do the Googling for you." But its search ranking algorithm remains a black box, and the only mention of what's in the training data powering Gemini is that it's based on "over a trillion facts about people, places, and things."

Who's deciding what those facts are, let alone giving them the green light to be included in a model that's supposed to do "the searching, the researching, the planning, the brainstorming"?   

And Google made no mention of Gemini's hallucination rate — that is, how often it delivers answers that sound like they're true but in fact aren't. One hallucination leaderboard puts Gemini's hallucination rate at around 4.6% to 4.8%.

My search of the I/O transcript on YouTube didn't turn up a single mention of hallucinations. 

To be sure, other makers of AI engines haven't come forward to talk about their hallucination rates and training data, or how they're handling copyright concerns, or how their large language models prioritize and assess sources of information, authoritative or not. 

But here's the thing: Google is used by billions of people. Billions. OpenAI, which last week announced a new version of its chatbot, called ChatGPT-4o, said in a video demo that over 100 million people are using its tools today.  

So, more show and less tell would go a long way to assuring us that Google, as Pichai said, is taking "a bold and responsible approach to making AI useful for everyone."  

Here are the other doings in AI worth your attention.

OpenAI's new ChatGPT-4o chatbot gets faster, chattier and is free

There were rumors that OpenAI might try to undercut Google by announcing an AI search engine a day ahead of Google I/O. Instead, as CNET's Lisa Lacy noted, it launched a new version of ChatGPT that's much faster, so that the back-and-forth you can have with it — via text, audio, images and video — seems more natural. (Supporting text, audio, images and video is what makes it "multimodal.")

OpenAI is also making this powerful new model available free and introduced a desktop version as part of its effort to get more consumers to try it. Earlier this year, Lacy reported, OpenAI dropped the requirement to sign up for an account.

Called ChatGPT-4o (the "o" stands for omni), it can "respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation," according to OpenAI. And it can understand and converse in 50 languages, the company said, with one demo showing a real-time back-and-forth in Italian and English. 

It also has better "vision," which means you can show it something — a selfie, a math equation written on a piece of paper, screenshots, documents, photos — and ChatGPT-4o will recognize what you're showing it and respond appropriately.

With the new real-time conversational speech functionality, you can also interrupt the chatbot so you don't have to wait for it to finish answering before you can start speaking, Mark Chen, head of frontiers research at OpenAI, said during the product rollout. It also picks up on your emotions and projects emotions as well. I thought the chatbot, with demos of male and female voices, sounded upbeat and cheery — making it chattier than before.

If this all sounds a bit wonky, check out the demos and see/hear for yourself. In one, an OpenAI employee asks ChatGPT-4o to let him know if a joke he's working on passes as a "dad joke." After congratulating the employee on the upcoming addition to his family, the chatbot tells him to tell the joke, saying, "Lay it on me." Joke: What do you call a pile of kittens? Answer: A meowtain. ChatGPT-4o laughed and decided it was a "top-tier dad joke." 

The new model is part of OpenAI's effort to turn ChatGPT into an AI voice assistant (you start exchanges with the wake-up phrase "Hey, ChatGPT"). The news comes as Google (with its Gemini-powered AI Agents) and Apple (with its Siri assistant) amp up their virtual agents with gen AI. Apple is expected to announce AI enhancements to Siri at its developers conference in June.

OpenAI buys paper-based books for its 'old-fashioned' library  

In other OpenAI news, The New York Times got an inside look at the company's headquarters in San Francisco and reported on the two-story, "old-fashioned" library that CEO Sam Altman had built — old-fashioned because it contains physical books (as in paper-based), dark-wood furniture and tasteful Oriental rugs. 

The books on the shelves were recommended by the company's 1,200 employees, and the tour provided by the Times shows titles including Frank Herbert's Dune, Neal Stephenson's Cryptonomicon, the Pulitzer Prize-winning biography of Robert Oppenheimer and the story of Ernest Shackleton's doomed but amazing Antarctic expedition aboard the Endurance (the saga of Mrs. Chippy always makes me sad).

A catalog of the library's collection wasn't shared, but I'm hoping there are some notable children's books in there — anything by Maurice Sendak, Ezra Jack Keats, Ludwig Bemelmans or Beatrix Potter — if the intent is to be truly inspired by inspiring writers and stories. 

Of course, some people will note the irony of OpenAI paying homage to authors and books (and buying books), given the various lawsuits by best-selling authors and other content creators (including the NYT) who say the company has co-opted their copyrighted content without permission or compensation to train its large language model.

"There is something about sitting in the middle of knowledge on the shelves at vast scale that I find interesting," Altman told the paper. 

What a lovely sentiment, Sam. Let's all turn off our devices and support our local libraries and booksellers.

Bipartisan Senate report recommends ways to regulate AI, sort of

Senate Majority Leader Chuck Schumer, who over the past year has led the Senate's bipartisan AI Working Group, released a report with recommendations on how the US should handle AI. Rather than proposing specific legislation, it calls for government agencies and congressional committees to come up with proposed laws and regulations, with Schumer adding that the group will work on individual bills as they come up, with election-related bills a top priority.

"It's very hard to do regulations because AI is changing too quickly," Schumer said in an interview with The New York Times. "We didn't want to rush this."

The group described the 20-page report, called "Driving US Innovation in Artificial Intelligence," as a "roadmap" for US policy and said it was based on nine meetings with more than 150 experts (all those involved are listed in the appendix). Schumer, a Democrat from New York, worked with Sens. Mike Rounds (R-SD), Martin Heinrich (D-NM) and Todd Young (R-IN) on the report. 

The group called for creating a federal data privacy law to protect personal information, and urged the US to spend at least $32 billion a year starting in 2026 and beyond for "non-defense" AI innovation. It also called on AI companies to help guard against deepfakes and other AI-generated content ahead of the November elections.

The push for legislation is one of the biggest moves by the US government to meet the challenges created by the boom in AI technologies fueled by OpenAI and its release of ChatGPT in late 2022, CNET's Ian Sherr reported. In October, President Joe Biden released an executive order directing federal agencies to address AI. The Department of Labor last week released what it calls a "set of principles that provide employers and developers that create and deploy artificial intelligence with guidance for designing and implementing these emerging technologies in ways that enhance job quality and protect workers' rights." Among the eight principles: making sure workers have "genuine input" into employers' AI policies.

Still, the US is behind the European Union, which passed the world's first AI legislation in March. Called the EU AI Act, it calls for safety guardrails around how AI technology is developed, and for consumer privacy protections.  

Microsoft data centers, built to handle AI push, increase carbon emissions

Microsoft released its annual sustainability report last week, and its goal to reduce carbon emissions hit a bit of a snag after its emissions jumped 30% in 2023 over 2022 because of investments in new data centers.

"Data centers are critical infrastructure for running and supporting AI models such as large language models, the technology behind OpenAI's ChatGPT and Google's Gemini, which are seeing surging adoption worldwide," CNET's Sareena Dayaram reported. "Such AI-based services require more of the same power-hungry data centers built from carbon-intensive materials such as steel and concrete."

Microsoft, which uses ChatGPT to power its AI search engine Bing and other AI tools, has reportedly partnered with OpenAI for a data center project set to launch in 2028, Dayaram added, citing Reuters. "This project could cost as much as $100 billion and include an artificial intelligence supercomputer called 'Stargate,'" she said. Microsoft is expected to announce updates related to its AI vision at its annual developers conference, Build, on May 21.

Editors' note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you're reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.