A few weeks ago, my wife and I made a bet. I said there was no way ChatGPT could mimic my writing style for smartwatch reviews. I had already asked the bot to do this months ago, and the results were hilarious. My wife bet that they could ask ChatGPT the same thing and get a much better result. My problem, they said, was that I didn’t know the right questions to ask to get the answers I wanted.

Unfortunately, they were right. ChatGPT wrote a much better review as me when my wife did the asking.

That memory crossed my mind while live-blogging Google I/O. This year’s keynote was a two-hour thesis on AI, how it will impact Search, and all the ways it’s going to boldly and responsibly make our lives better. Most of it was neat. But when Google publicly admitted that asking AI the right questions is difficult, I felt a shiver run down my spine.

During the demo of Duet AI, a suite of tools that will work within Gmail, Docs, and more, Google showed off a feature called Sidekick, which offers suggestions that actively change based on the Workspace document you’re working on. In other words, it prompts you on what to ask it by telling you what it can do.

This came up again later in the keynote when Google showed off its new AI-powered search, called Search Generative Experience (SGE). SGE takes any question you type into the search bar and generates a mini report, or “snapshot,” at the top of the page. Below that snapshot, it suggests follow-up questions you might ask next.

As a person whose job is to ask questions, I found both demos troubling. The queries and prompts Google used on stage are nothing like the ones I actually type into my search bar. My search queries often read like baby talk. (They’re usually followed by “Reddit” so that I get answers from actual humans instead of SEO content mills.) Things like “Bald Dennis BlackBerry movie actor name.” When I’m looking for something I wrote about Peloton’s 2022 earnings, I type “site:theverge.com Peloton McCarthy ship metaphors.” Rarely do I search for things like “What should I do in Paris for a weekend?” It doesn’t even occur to me to ask Google such things.

Admittedly, I don’t know what to make of generative AI in general. I can watch a million demos and still get stumped by the blank prompt window. It’s like I’m back in second grade and my exasperated teacher is calling on me with a question I just don’t know the answer to. When I do ask, the results I get are laughably bad, and fixing them would take longer than just doing the work myself.

My wife, on the other hand, took to AI like a fish to water. After our bet, I watched them play with ChatGPT for an hour. What struck me most was how different our prompts and requests were. Mine were short, open-ended, and broad. My wife left the AI little room for interpretation. “You have to hold its hand,” they said. “You have to feed it exactly what you need.” Their commands and queries were hyper-specific, long, and often included reference links or datasets. But even they had to repeat and refine their prompts to get exactly what they were looking for.

SGE snapshots also tell you what to ask next.
Image: Google

And that’s just ChatGPT. Google’s offerings go a step further. Duet AI is designed to pull contextual information from your emails and documents and intuit what you need (which is wild, because half the time I don’t even know what I need). SGE is designed to answer your questions, even the ones that don’t have a “right” answer, and predict what you might ask next. For this more intuitive AI to work, programmers have to teach the AI what questions to prompt users with so that users, in turn, can ask it the right questions. Which means programmers need to know what questions users want answered before users ever ask them. It gives me a headache just thinking about it.

Not to get too philosophical, but you could say all of life is about figuring out the right questions to ask. To me, the most troubling thing about the age of artificial intelligence is that I don’t think any of us knows what we actually want from AI. Google thinks it’s everything it showed on stage at I/O. OpenAI thinks it’s chatbots. Microsoft thinks it’s a really horny chatbot. But whenever I talk to the average person about AI these days, the question everyone wants answered is simple: How will AI change and impact my life?

The problem is that nobody, not even the bots, has a good answer to that. And I don’t think we’ll get a satisfying one until everyone takes the time to rewire their brains to speak with AI more fluently.
