How the company is repeating the mistakes of Elizabeth Holmes.


The new AI features that Google announced a few weeks ago are finally hitting the mainstream, though not in a way Google might have hoped.

As you’ve likely learned from recent coverage and conversations (or even experienced yourself), there are now so many auto-generated AI Overviews sitting atop Google search results that to call the answers wrong is … well, true, but not quite sufficient. Try surreal, ridiculous, and potentially dangerous instead. Since launch, AI Overviews have recommended that users smoke during pregnancy, add glue to their home-baked pizza, spray used antifreeze on their lawns, and boil mint to treat appendicitis.

Google appears to be fixing the wrong answers to both straightforward and joking queries by hand, editing each offending Overview one at a time. Still, Google’s botched top-of-page answers extend to other features of the search engine, such as its automatic calculator: a user in the United States posted a screenshot to X showing that Google’s technology couldn’t even register that “cm” stands for centimeters, reading the unit as meters instead. Search engine optimization expert Lily Ray independently confirmed this finding.

The massive spread of AI Overviews has prompted users and analysts to share another, even more troubling discovery: the underlying Gemini bot appears to generate its “answers” first and find supporting quotes afterward. As a result, plenty of old, spammy, and broken links show up as the supposed sourcing for these answers. Nevertheless, Google, which still rakes in heaps of digital ad dollars despite recently losing some of its search market share, wants to insert more ads into Overviews, some of which may themselves be “powered by AI.”

Meanwhile, the mere appearance of AI Overviews diverts traffic away from the more reliable sources that would otherwise surface at the top of Google’s results. Contrary to CEO Sundar Pichai’s statements, SEO experts have found that links shown within Overviews don’t win many clicks from their placement. (This factor, along with the misinformation, is part of the reason many major news organizations, including Slate, have declined to take part in AI Overviews. A Google spokesperson told me that “such analyses are not a reliable or comprehensive way to estimate traffic from Google Search.”)

Ray’s research shows that the search traffic Google drives to publishers has generally declined this month, even as Google grants more visibility to posts from Reddit, which is, incidentally, both the source of the infamous pizza-glue recommendation and a site that recently signed multimillion-dollar contracts with Google. (A Google spokesperson responded: “This is by no means a comprehensive or representative study of traffic from Google Search to news publications.”)

It’s likely that Google was aware of all these problems before pushing AI Overviews into prime time. Pichai has described chatbots’ “hallucinations” (that is, their tendency to make things up) as an “inherent feature,” and has even admitted that such tools, engines, and datasets “are not always the best approach to getting at the facts,” something he told the Verge he thinks Google Search’s data and capabilities will fix. That seems questionable in light of Google’s own algorithms, which keep various reliable news sources from appearing in search and may also be “burying small sites intentionally,” as SEO expert Mike King noted in his recent examination of leaked Google Search documents. (A Google spokesperson claims this is “categorically false” and cautioned against “making inaccurate assumptions about Search based on out-of-context, out-of-date, or incomplete information.”)

More to the point: Google’s flawed AI has been embarrassing the company in public for some time now. In 2018, Google showed off voice-assistant technology that could supposedly call and converse with people in real time, but Axios found that the demo may have used pre-recorded conversations rather than live ones. (Google declined to comment at the time.) Google’s pre-Gemini chatbot, Bard, debuted in February 2023 and gave a wrong answer that temporarily depressed the company’s stock price. Later that year, an impressive video demonstration of the company’s multimodal Gemini AI turned out to have been edited after the fact to make its responses appear faster than they actually were. (Cue another stock-price dip.) And it was none other than Gemini that, at the company’s annual developer conference just a few weeks ago, highlighted a wrong suggestion for fixing your film camera.

In fairness to Google, which has been working on AI development for a long time, the rapid rollout and hype around all these tools is likely its way of adapting to the era of ChatGPT, a chatbot that, by the way, still produces a significant number of incorrect answers across a range of topics. Nor are the other companies chasing the AI trends that woo investors immune from making their own ridiculous mistakes or faking their most impressive demos.

Last month, it emerged that Amazon’s AI-powered, cashierless “Just Walk Out” grocery store concept actually relied on … many people behind the scenes to monitor and program the shopping experience. The same goes for the supposedly “AI-powered” unmanned drive-through systems used by chains such as Checkers and Carl’s Jr. There are also the “driverless” Cruise cars, which require remote human intervention nearly every couple of miles. ChatGPT parent company OpenAI is not immune either, employing many people to clean and polish the animated visual landscapes generated, seemingly wholesale, from prompts fed to its not-yet-public Sora image and video generator.

All of this, mind you, is yet another layer of labor hidden on top of the human operations outsourced to countries such as Kenya, Nigeria, Pakistan, and India, where workers are underpaid or allegedly subjected to conditions of “modern slavery” in order to consistently feed corrective feedback to AI bots and tag egregious images and videos for content-moderation purposes. And that’s to say nothing of the people working in the data centers, chip factories, and power plants required to keep all of this running.

So let’s recap: After years of outlandish, debunked claims, staged demos, refusals to offer further transparency, and “humanless” branding that belies the many people working behind the scenes in various (and often harmful) ways, these AI creations are still bad. They continue to make things up on a wide scale, plagiarize their sources, and offer information, advice, “news,” and “facts” that are wrong, nonsensical, and potentially dangerous to your health, to the body politic, to people trying to do simple math, and to others scratching their heads wondering where their car’s “blinker fluid” is.

Does this remind you of anything else in tech history? Perhaps Elizabeth Holmes, who likewise staged plenty of demos and made fantastical claims about her company, Theranos, in order to sell an impossible “technological innovation”?

Holmes is now behind bars, but the scandal still lingers in the public imagination. In retrospect, the warning signs should have been blindingly obvious, yeah? Her biotech startup, Theranos, had no health professionals on its board. She made wild scientific claims unsupported by any authority and refused to offer any justification for those statements. Theranos formed partnerships with massive (and widely trusted) institutions like Walgreens, which failed to vet the company’s technology. It fostered a deep, menacing culture of secrecy, forcing its employees to sign aggressive nondisclosure agreements. It earned glowing endorsements from famous and powerful people, including then–Vice President Joe Biden, through sheer force of hype alone. And it constantly hid everything that actually powered its systems and inventions, until dogged reporters went looking for themselves.

Nearly 10 years have passed since Holmes was finally exposed. Yet it’s clear that the crowds of tech watchers and analysts who once took her at her word are just as willing to extend their full credence to the humans behind these error-prone, confounding, secretly human-assisted AI bots, which their creators promise will change everything and everyone. Unlike Theranos, of course, companies like OpenAI have actually released functional products for public consumption and achieved some genuinely impressive feats. But the hurry to force this technology everywhere, to claim supposed capabilities it doesn’t come close to possessing, and to keep it approachable despite a not-so-obscure track record of missteps and mistakes: that is where we’re borrowing from the Theranos playbook once again. We haven’t learned anything. And the masters behind chatbots that actually teach you nothing may well prefer it that way.
