Dear Sapiens,
AI doomsday evangelism is on the rise as AI pioneers and industry leaders desperately seek governmental regulation over fears that AI poses "an existential crisis" on the level of nuclear weapons. As a lover of all things tech and AI, I admit to being biased on this subject.
However, even those casually interested in these novel Generative AI technologies must find themselves scratching their heads. Are nonembodied chatbots and photo generators really the precursors to a "Terminator"-like cataclysmic event that will bring about the sudden extinction of us Sapiens? And are our tech oligarchs, who stand to profit billions if not trillions, as altruistic as they want us to believe?
News
Doomers Sound the Alarm, Regulate or Go Extinct
Over the past few weeks, media sites have been inundated with news that recent advances in AI may pose serious threats to our democracy, intellectual property, consensus on factual information, and, in the worst-case scenario, "risk of extinction".
Key Insights
It all started in late March, when the Future of Life Institute released a petition calling for a six-month moratorium on the development of AI models that exceed the capabilities of GPT-4. It was signed by highly notable figures across a myriad of professions and industries, including Yoshua Bengio (one of the "Godfathers of AI"), Elon Musk, Steve Wozniak, Emad Mostaque of Stability AI, Andrew Yang, and thousands of professionals from esteemed institutions.
In early June, Safe.AI (the Center for AI Safety) followed up with a statement of its own, signed by hundreds of heavyweights across AI, academia, and industry, among them Geoffrey Hinton, the "Godfather of AI" himself. The statement reads:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Public Reception
Not all AI scientists agreed that existing AI systems pose an existential threat, and even if they did, would a six-month moratorium really mitigate the risk? How would we enforce it? Would it enable America's adversaries to catch up in the "Arms Race" of a lifetime? These are the questions raised by critics.
A few movers and shakers in the industry, most notably Andrew Ng, founder of DeepLearning.AI, and Dr. Yann LeCun, Chief AI Scientist at Meta, share the view that Artificial General Intelligence (AGI) is far beyond our current systems.
In a recent conversation with LeCun, Andrew Ng argued that concern over superintelligent AI systems is less pressing than more realistic and legitimate issues. "I feel like while AI today has some risks of harm. Bias, fairness, concentration of power, those are real issues," he stated.
LeCun mostly agreed, adding:
And until we have some sort of blueprint of a system that has at least a chance of reaching human intelligence, discussions on how to properly make them safe and all that is premature, because how can you, I don't know, design seatbelts for a car if the car doesn't exist?
Our Take
Given the revolutionary advances in AI, we understand why the public, institutions, and organizations of all types are concerned about how this technology could be used by bad actors to negatively impact society.
Although we agree that regulation will be required to mitigate these risks, it's critical that experts take a responsible and realistic approach to AI safety, refrain from exaggerating the threat of existing AI systems with terms like 'extinction', and recommend viable solutions to real problems such as those raised by Andrew Ng.
ChatGPT Hallucinates Legal Cases, Lawyer Busted
In a comical yet bizarre turn of events, a lawyer from NYC faces potential sanctions for citing fabricated cases generated by ChatGPT on at least two occasions while representing his client in a tort lawsuit against Avianca Airlines.
Key Insights
- The client, Roberto Mata, is suing Avianca Airlines for negligence, alleging that a metal serving cart struck his knee during a flight from El Salvador to JFK Airport in NYC.
- Avianca Airlines filed to have the case dismissed.
- An attorney conducting research on the case at Levidow, Levidow & Oberman used ChatGPT to draft a brief objecting to the dismissal, citing multiple cases as precedent in the attempt to move the case forward.
- Attorneys for Avianca Airlines claimed they could not find the cited cases in any legal database.
- The judge ordered Mata's attorneys to provide copies of the cases.
- Mata's attorneys provided affidavits admitting to the use of ChatGPT for research but denying any intent to deceive or fabricate, while still re-presenting copies of the bogus cases.
- Judge Kevin Castel issued a "show cause" order requiring Mata's attorneys to present a valid reason why they should not be sanctioned by the court.
Our Take
On the surface, this report is a bit comical: you'd think that two professional attorneys from a competitive metropolis like NYC would have rigorous fact-checking standards. According to the New York Times article, they failed to conduct independent verification twice while using ChatGPT for legal research.
Professionals must understand that LLMs like ChatGPT are not "intelligent" or "aware" but rather advanced calculators for natural language. In their attempt to predict a quality response, they are liable to make errors and even "hallucinate" false information in an eloquent, matter-of-fact way. Therefore, any information used for professional purposes should be independently verified.
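To see why, it helps to look at what these models actually do under the hood. Below is a minimal sketch of next-token prediction, using the open source Hugging Face transformers library and the small GPT-2 model as a stand-in for ChatGPT's far larger system (the case name in the prompt is invented for illustration): the model simply scores which token is statistically likely to come next, so it will fluently continue a citation to a case that never existed.

```python
# A minimal sketch of next-token prediction, using the open source Hugging Face
# `transformers` library and the small GPT-2 model as a stand-in for far larger
# LLMs like ChatGPT (an illustrative assumption, not the actual ChatGPT stack).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# A prompt citing a case we invented for this example. The model has no legal
# database to consult; its only mechanism is scoring which token is
# statistically likely to come next.
prompt = "The court in Smith v. Acme Airlines (2008) held that"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Turn the scores for the final position into a probability distribution over
# the next token, then show the top candidates. The model will fluently
# continue the fabricated citation; nothing here checks whether it is real.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob:.3f}")
```

Nothing in that loop consults a source of truth, which is exactly why citations generated this way have to be verified by hand against a real database.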
Nvidia Moons 🚀, Briefly Hits 1 Trillion
Propelled by AI hype, excitement over Generative AI, and the demand for ever more powerful GPUs to meet the compute needs of LLMs, Nvidia's market cap briefly hit one trillion dollars.
Check out how it stacks up against the other members of the club:
Note that PetroChina, the first company ever to reach $1 trillion, is not displayed in this chart, illustrated by Visual Capitalist.
Key Insights
- Fewer than 10 companies worldwide have ever joined the trillion-dollar club.
- Nvidia has become the 9th company ever to top $1 trillion in market value, according to Bloomberg.
Our Take