In the tech industry’s nascent AI arms race, Google, which invented much of the latest technology, should be well positioned to be one of the big winners.
There’s just one problem: with regulators breathing down its neck and a highly profitable business model to defend, the internet search giant may be hesitant to use the many weapons at its disposal.
Microsoft threw down a direct challenge this week when it sealed a multibillion-dollar investment in AI research company OpenAI. The move comes less than two months after the release of OpenAI’s ChatGPT, a chatbot that answers queries with paragraphs of text or code, fueling speculation that generative AI may one day replace internet search.
Microsoft executives, who hold privileged rights to commercialize OpenAI’s technology, make no secret of their aim to use it to challenge Google, rekindling an old rivalry that has simmered since Google won the search wars a decade ago.
DeepMind, the London-based research firm that Google bought in 2014, and Google Brain, the cutting-edge research unit at its Silicon Valley headquarters, have long given the search company one of the strongest footholds in artificial intelligence.
Google has recently broken ground with several variations on the generative AI that underpins ChatGPT, including AI models capable of telling jokes and solving math problems.
One of its most advanced language models, known as PaLM, is a general-purpose model three times larger than GPT-3, the AI model underlying ChatGPT, as measured by the number of parameters the models are trained on (roughly 540 billion versus 175 billion).
Google’s LaMDA chatbot, or Language Model for Dialogue Applications, can speak to users in natural language, similar to ChatGPT. The company’s engineering teams have been working for months to integrate it into a consumer product.
Despite its technical advances, much of this latest technology remains a research project. Google’s critics say the company is hamstrung by its lucrative search business, which deters it from applying generative AI to consumer products.
Sridhar Ramaswamy, a former senior Google executive, said that answering queries directly instead of simply directing users to suggested links would result in fewer searches.
That left Google with the “classic innovator’s dilemma” — a reference to Harvard Business School professor Clayton Christensen’s book, which tries to explain why industry leaders often fall prey to fast-moving startups. “If I was running a $150 billion business, I’d be scared of this thing,” Ramaswamy said.
“We have long focused on the development and application of artificial intelligence to improve people’s lives. We believe that artificial intelligence is a foundational and transformative technology that is incredibly useful for individuals, businesses and communities,” Google said. However, the search giant “must consider the broader societal impact these innovations may have.” Google added that it would announce “more external experiences soon.”
Besides leading to fewer searches and less revenue, the spread of generative AI could saddle Google with higher costs.
Based on OpenAI’s pricing, Ramaswamy estimated it would cost $120 million to use natural language processing to “read” all the web pages in a search index, which could then be used to generate more direct answers to the questions people type into a search engine. Analysts at Morgan Stanley, meanwhile, estimated that answering a search query with language processing costs about seven times more than a standard internet search.
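For a rough sense of the arithmetic behind such figures, the sketch below reconstructs a $120 million estimate from illustrative inputs. The per-token price reflects OpenAI’s published rate for its largest GPT-3 model at the time (about $0.02 per 1,000 tokens); the index size and average page length are hypothetical placeholders chosen so the numbers land on Ramaswamy’s figure, not values reported in this article.

# Rough back-of-envelope reconstruction of the $120mn estimate (Python).
# Per-token price is OpenAI's published GPT-3 (Davinci) rate at the time;
# index size and page length are assumed placeholders, not reported data.
PRICE_PER_1K_TOKENS = 0.02         # USD per 1,000 tokens (OpenAI Davinci)
AVG_TOKENS_PER_PAGE = 1_000        # assumption: average web-page length
PAGES_IN_INDEX = 6_000_000_000     # assumption: pages in a search index

total_tokens = PAGES_IN_INDEX * AVG_TOKENS_PER_PAGE
cost = total_tokens / 1_000 * PRICE_PER_1K_TOKENS
print(f"One-off cost to 'read' the index: ${cost:,.0f}")  # $120,000,000

Under these assumptions, “reading” the index is a one-off cost; the Morgan Stanley comparison concerns the recurring per-query cost, which scales with search volume rather than index size.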
The same considerations could keep Microsoft from overhauling its Bing search engine, which generated more than $11 billion in revenue last year. But the software company said it plans to use OpenAI technology across all of its products and services, potentially leading to new ways to present relevant information to users while they’re inside other apps, thus reducing the need to refer to a search engine.
A number of current and former employees close to Google’s AI research teams say the biggest constraints on the company releasing its AI have been concern about potential harms and how they would affect Google’s reputation, as well as an underestimation of the competition.
“I think they were asleep at the wheel,” said a former Google AI scientist who now runs an artificial intelligence company. “Frankly, not everyone appreciated how language models would disrupt search.”
Adding to these challenges are the political and regulatory concerns raised by Google’s growing power, which have placed the industry leader under greater public scrutiny as it adopts new technologies.
According to one former Google executive, company leaders worried more than a year ago that sudden advances in AI’s capabilities could trigger a wave of public concern about the implications of such powerful technology being in the company’s hands. Last year, the company appointed former McKinsey executive James Manyika as a new senior vice president to advise on the broader social implications of its new technology.
Manyika said the generative artificial intelligence used in services like ChatGPT is prone to giving wrong answers and can be used to create misinformation. Speaking to the Financial Times a few days before ChatGPT’s release, he added: “So we’re not in a rush to put these things out in the way that people expect us to.”
However, the huge interest instigated by ChatGPT has intensified pressure on Google to match OpenAI more quickly. That leaves it with the challenge of showcasing its AI prowess and integrating the technology into its services without damaging its brand or provoking a political backlash.
“If they write a sentence of hate speech and it’s close to the Google name, that’s a real problem for Google,” said Ramaswamy, co-founder of search startup Neeva. Google is held to a higher standard than a startup, which can more plausibly claim its service is simply an objective summary of content available on the web.
The search firm has come under fire over AI ethics before. In 2020, an uproar erupted over its approach to the ethics and safety of AI technologies when two prominent AI researchers left in contentious circumstances after objecting to a research paper assessing the risks of language-based AI.
Such incidents have put it under more public scrutiny than organizations like OpenAI or open-source alternatives like Stable Diffusion. The latter, which creates images from text prompts, has had a number of safety issues, including the generation of pornographic imagery. Its safety filter can easily be bypassed, according to AI researchers, who say the relevant lines of code can be deleted manually. Its parent company, Stability AI, did not respond to a request for comment.
OpenAI’s technology has also been abused by users. In 2021, an online game called AI Dungeon licensed GPT, using the text generator to create storylines tailored to individual users’ prompts. Within months, users were generating gameplay that depicted child sexual abuse, among other disturbing content. OpenAI eventually pushed the company to introduce better moderation systems.
OpenAI did not respond to a request for comment.
A former Google AI researcher said the backlash would have been far worse had something like this happened at Google. Now that the company faces a serious threat from OpenAI, the question is whether anyone there is willing to take on the responsibility and risk of releasing new AI products sooner.
Microsoft, however, faces a similar dilemma over how to use the technology. It has tried to present itself as more responsible than Google in its use of artificial intelligence. Meanwhile, OpenAI has warned that ChatGPT is prone to inaccuracies, making it difficult to deploy the technology in its current form as a commercial service.
But ChatGPT, the most dramatic demonstration yet of the artificial intelligence sweeping the tech world, has served notice that even stalwarts like Google could be at risk.