Key points:
- Despite Google's vast resources and capabilities, it was slow to venture into the ChatGPT domain.
- Several factors contribute to this, including the "big company tax" that often slows innovation within large corporations, excessive internal planning processes, and the greater reputational risk and scrutiny faced by a company of Google's size.
- Google's susceptibility to external pressures, media scrutiny, and societal trends can impact its decision-making and innovation speed.
- Ethical and responsible AI considerations, while important, can be challenging for Google due to the greater criticism it faces in this domain, even when taking proactive measures.
OpenAI's ChatGPT emerged as the new, fascinating frontier in the world of artificial intelligence. The global community stands in awe, with entrepreneurs on YouTube exploring opportunities to build ventures around it. Predictably, discussions have surfaced on how this groundbreaking technology will generate new employment opportunities.
Of course, it wouldn't be the digital age without the customary influx of articles and experts deliberating on the potential downsides and challenges. Many of these concerns are entirely valid, and it is imperative to engage in open, balanced discussions encompassing all facets, both favorable and adverse. The key is to approach these discussions rationally, steering clear of undue alarm, a nod to Jeff Jarvis for invoking the term "moral panic," which underscores the need for measured discourse.
However, my intent today isn't to delve into ChatGPT itself; instead, I'd like to pose a question: why didn't Google venture into the ChatGPT domain first?
Google, renowned for its substantial investments in AI, boasts unrivaled access to cutting-edge hardware, a reservoir of intellectual talent, relentless research efforts, and vast data resources. So what's holding them back (or, at least, what did)?
The reasons, while straightforward and widely acknowledged, are worth revisiting:
Big company tax
Innovation often proceeds at a glacial pace within large corporations. It's not that Google lacks the ambition or capability to develop a conversational AI agent; rather, it's the protracted journey from concept to launch that stifles progress. For instance, Google introduced LaMDA in May 2021, yet its internal processes resemble an elephant lumbering uphill with a mountain on its back, compounded by a bureaucratic labyrinth of reviews and approvals.
Excessive planning (internal velocity)
The corporate landscape at Google is marked by an abundance of planning stages: annual OKRs, quarterly OKRs, and OKRs layered five-deep. Each layer spawns verbose debates over wording and adherence to the SMART criteria, grappling with the elusive measurability of key results. These OKRs manifest in various formats, from slides to spreadsheets and even custom-built tools, all originating from someone's 20% time. Engineering, product management, and operations teams each wield their own OKRs, fostering an environment where anything out of alignment is perceived as anathema. The proposed remedy? A consolidated list of OKRs, though not before someone crafts a 300-row spreadsheet labeled "OKR dependency tracking," a classic case of not seeing the forest for the trees.
Big company equals bigger target tax (legal, PR)
In the words of Jeff Dean, "the company faces substantially more reputational risk" than smaller startups. The specter of litigation, PR crises, and regulatory scrutiny looms large, not to mention the constant attention of "anti-big tech" voices that cast a shadow over Google, amplifying the potential negative repercussions of any endeavor.
Susceptibility to social trends and controversies
Externally, the media, journalists, and bloggers tend to magnify issues that, in my opinion, are often exaggerated. In the contemporary landscape, criticizing big tech has become a surefire way to attract clicks and views.
For example, there were exaggerated narratives surrounding tech employees' stance against government contracts, exemplified by Google employees protesting a Department of Defense contract. In a similar vein, the media seized upon a petition signed by around 4,000 employees, suggesting they represented a significant faction within Google. To put this into perspective, that amounts to less than 4% of Google's workforce, a fraction that hardly constitutes a definitive representation, particularly in a world where presidential elections are decided by 51%/49% margins. The net effect: slower execution, shifted focus, and distraction from moving forward and winning in the marketplace.
Emphasis on ethical and responsible AI
Do not get me wrong: I am a big believer in, and personally invested in, responsible product development, fairness in ML, and responsible AI. I dedicate much of my time to these areas and believe significant thought and effort should be devoted to them. But with that said, we should acknowledge the following…
Google is more vulnerable to criticism in this domain no matter what it does. For example, Google's dismissal of Timnit Gebru, a prominent figure on the Ethical AI team, in December 2020 generated headlines, as did a seemingly minor issue that snowballed into a major controversy: a Google engineer claiming an AI chatbot possessed sentience. Google had, and still has, large teams working on these topics — hundreds if not thousands of people across policy, product management, engineering, and research, costing Google millions in salaries alone. Yet to outside observers, it is enough to find a handful of people whose opinions diverge from the mothership's to absorb 99% of the attention and to frame everything Google does through their lens.
The veracity of these claims is not the focal point. What bears consideration is the cumulative effect of media scrutiny, societal discourse, and governmental attitudes directed at Google over the past half-decade. It should come as no surprise that, akin to a healthy organism developing antibodies to ward off potential threats, Google has cultivated safeguards within its structure. These safeguards operate to temper and scrutinize initiatives that might incite controversy and jeopardize the host organism.
Google is making changes and adapting, or at least trying to. There are plenty of smart, well-intentioned people at Google and in its leadership, as in most (but not all) companies. Google, as a big and deep-pocketed company, certainly has its advantages, but the challenges above create non-trivial headwinds.