Google’s Launch of AI Model “Gemini” Shakes Up the Industry

Google on Wednesday made a major stride in artificial intelligence by launching Project Gemini, an AI model developed to mimic human behavior.

As The Associated Press reports, the release of the Gemini model is set to take place in phases. Rudimentary versions known as “Nano” and “Pro” have been greenlighted for immediate incorporation into Google’s Pixel 8 Pro smartphone and the AI chatbot Bard.

The tech company says Gemini will make Bard more intuitive and better at planning-heavy tasks. As for the Pixel 8 Pro, Gemini will ostensibly allow the phone to quickly summarize recordings and enable automatic replies on messaging platforms such as WhatsApp.

Next year will see even bigger changes, such as the use of Gemini’s Ultra model to power “Bard Advanced,” a supercharged version of the chatbot that will initially be available only to test users.

AP further reported:

The AI, at first, will only work in English throughout the world, although Google executives assured reporters during a briefing that the technology will have no problem eventually diversifying into other languages.

Based on a demonstration of Gemini for a group of reporters, Google’s “Bard Advanced” might be capable of unprecedented AI multitasking by simultaneously recognizing and understanding presentations involving text, photos and video.

Gemini will also eventually be infused into Google’s dominant search engine, although the timing of that transition hasn’t been spelled out yet.

The AI division behind Gemini is Google DeepMind, a London-based operation the search giant acquired nearly a decade ago after beating out other big-time bidders such as Facebook parent Meta. Google combined DeepMind with its “Brain” division and has kept the merged unit hyper-focused on developing Gemini.

Google is touting Gemini as being skilled in math and physics, which has many in the AI field hopeful that it will contribute to major breakthroughs in science and technology.

Others, however, have greeted news of the technology’s potential with caution, warning that as AI surpasses human intelligence, it could lead to the loss of millions of jobs, the spread of misinformation, and even devastating nuclear weapon launches.

Google CEO Sundar Pichai authored a blog post in which he assured the public that the company is making advances “responsibly.”

“We’re approaching this work boldly and responsibly,” wrote Pichai. “That means being ambitious in our research and pursuing the capabilities that will bring enormous benefits to people and society, while building in safeguards and working collaboratively with governments and experts to address risks as AI becomes more capable.”

Google hopes that Gemini will give its AI game a leg up over the competition. Thus far, the search giant has lagged behind Microsoft, which has positioned itself as a powerhouse thanks to its partnership with OpenAI, developer of the globally popular ChatGPT tool, released late last year.

ChatGPT was made available for free to build anticipation for OpenAI’s most advanced model, GPT-4. Google pushed Bard onto the market in February in an effort to capitalize on the AI demand sparked by ChatGPT, but shortly thereafter, in March, OpenAI released GPT-4, taking the wind out of Google’s sails.

According to a white paper released Wednesday, the most advanced version of Gemini outperformed GPT-4 on grade-school math, multiple-choice questions, and similar benchmarks. However, AI models were found to still struggle with higher-level reasoning tasks.

Artificial intelligence continues to evolve and is increasingly being employed in a diverse array of fields. In Romania, the prime minister has added an AI advisor to his Cabinet. There are AI lawyers helping defendants get out of parking tickets and AI writers producing content for top websites.

As The New American has previously reported, AI is increasingly being applied to military purposes, as well. In one notable example, an AI-powered drone aircraft bested a human-controlled aircraft in a dogfight organized by Chinese military researchers.

Earlier this year, over 1,100 professionals in artificial intelligence and related fields signed an open letter calling for a six-month moratorium on “giant AI experiments.” Signatories included Elon Musk, Andrew Yang, and Apple co-founder Steve Wozniak.

The letter argued that “AI systems with human-competitive intelligence can pose profound risks to society and humanity” and that, as a result, innovations should be “planned for and managed with commensurate care and resources” in order to prevent an “out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

As with any technology, AI has the potential for good and for abuse. While calls from top corporations warning of apocalyptic AI scenarios certainly stir alarmism, the danger is that the controls they want to impose, controls these often globalist executives hope to manage themselves, will be used for their own benefit, guiding AI’s development in a way that precludes its use by, or in favor of, those holding opposing political views.

For example, their talk of AI spreading “misinformation” inevitably means they fear AI will be used to disseminate right-wing talking points.

Thus, leaving things largely to the free market is a wise choice when it comes to AI’s future.