Playing With Fire – ChatGPT

The world is very different now. For man holds in his mortal hands the power to abolish all forms of human poverty and all forms of human life.

John F. Kennedy

Humans have mastered lots of things that have transformed our lives, created our civilizations, and might ultimately destroy us all. This year we’ve invented one more.


Artificial Intelligence has been the technology right around the corner for at least 50 years. Last year a set of specific AI apps captured everyone’s attention as AI finally crossed from the era of niche applications to the delivery of transformative and useful tools – Dall-E for creating images from text prompts, Github Copilot as a pair programming assistant, AlphaFold to predict the shape of proteins, and ChatGPT 3.5 as an intelligent chatbot. These applications were seen as the beginning of what most assumed would be domain-specific tools. Most people (including me) believed that the next versions of these and other AI applications and tools would be incremental improvements.

We were very, very wrong.

This year with the introduction of ChatGPT-4 we may have seen the invention of something with the equivalent impact on society of explosives, mass communication, computers, recombinant DNA/CRISPR and nuclear weapons – all rolled into one application. If you haven’t played with ChatGPT-4, stop and spend a few minutes to do so here. Seriously.

At first glance ChatGPT is an extremely smart conversationalist (and homework writer and test taker). However, this is the first time ever that a software program has become human-competitive at multiple general tasks. (Look at the links and realize there’s no going back.) This level of performance was completely unexpected. Even by its creators.

In addition to its outstanding performance on what it was designed to do, what has surprised researchers about ChatGPT is its emergent behaviors. That’s a fancy term that means “we didn’t build it to do that and have no idea how it knows how to do that.” These are behaviors that weren’t present in the small AI models that came before but are now apparent in large models like GPT-4. (Researchers believe this tipping point is the result of the complex interactions between the neural network architecture and the massive amounts of training data it has been exposed to – essentially everything that was on the Internet as of September 2021.)

(Another troubling potential of ChatGPT is its ability to manipulate people into beliefs that aren’t true. While ChatGPT “sounds really smart,” at times it simply makes things up, and it can convince you of something even when the facts aren’t correct. We’ve seen this effect in social media when it was people who were manipulating beliefs. We can’t predict where an AI with emergent behaviors may decide to take these conversations.)

But that’s not all.

Opening Pandora’s Box
Until now ChatGPT was akin to a chat box that a user interacted with. But OpenAI (the company that developed ChatGPT) is letting ChatGPT reach out and interact with other applications through an API (an Application Programming Interface). On the business side that turns the product from an incredibly powerful application into an even more incredibly powerful platform that other software developers can plug into and build upon.
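To make the idea of “plugging into” a chat model concrete, here is a minimal sketch of the kind of JSON request a developer’s application assembles and sends to a chat-completion API. The helper function, system prompt, and example messages are illustrative assumptions, not a definitive integration guide:

```python
import json

def build_chat_request(user_message, model="gpt-4"):
    """Assemble the JSON payload an application would POST to a
    chat-completion endpoint (structure is illustrative)."""
    return {
        "model": model,
        "messages": [
            # A system message lets the host app set the model's role.
            {"role": "system",
             "content": "You are an assistant embedded in another application."},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_chat_request("Summarize today's support tickets.")
# The host app would send this payload with its own API key, e.g.:
#   POST https://api.openai.com/v1/chat/completions
#   Authorization: Bearer <YOUR_API_KEY>
print(json.dumps(payload, indent=2))
```

The point for the essay is structural: once any program can assemble a request like this, the model stops being a destination website and becomes a component inside other software.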

By exposing ChatGPT to a wider range of input and feedback through an API, developers and users are almost guaranteed to uncover new capabilities or applications for the model that were not initially anticipated. (The notion of an app being able to request more data and write code itself to do that is a bit sobering. This will almost certainly lead to even more new unexpected and emergent behaviors.) Some of these applications will create new industries and new jobs. Some will obsolete existing industries and jobs. And much like the invention of fire, explosives, mass communication, computing, recombinant DNA/CRISPR and nuclear weapons, the actual consequences are unknown.

Should you care? Should you worry?
First, you should definitely care.

Over the last 50 years I’ve been lucky enough to have been present at the creation of the first microprocessors, the first personal computers, and the first enterprise web applications. I’ve lived through the revolutions in telecom, life sciences, social media, etc., and watched as new industries, markets and customers were created literally overnight. With ChatGPT I might be seeing one more.

One of the problems with disruptive technology is that disruption doesn’t come with a memo. History is replete with journalists writing about it and not recognizing it (e.g. the NY Times putting the invention of the transistor on page 46) or others not understanding what they were seeing (e.g. Xerox executives ignoring the invention of the modern personal computer with a graphical user interface and networking in their own Palo Alto Research Center). Most people have stared into the face of massive disruption and failed to recognize it because to them, it looked like a toy.

Others look at the same technology and recognize at that instant that the world will no longer be the same (e.g. Steve Jobs at Xerox). It might be a toy today, but they grasp what inevitably will happen when that technology scales, gets further refined and has tens of thousands of creative people building applications on top of it – they realize right then that the world has changed.

It’s likely we are seeing this here. Some will get ChatGPT’s importance instantly. Others will not.

Perhaps We Should Take A Deep Breath And Think About This?
A few people are concerned about the consequences of ChatGPT and other AGI-like applications and believe we are about to cross the Rubicon – a point of no return. They’ve suggested a 6-month moratorium on training AI systems more powerful than ChatGPT-4. Others find that idea laughable.

There is a long history of scientists concerned about what they’ve unleashed. In the U.S., scientists who worked on the development of the atomic bomb proposed civilian control of nuclear weapons. Post WWII in 1946 the U.S. government seriously considered international control over the development of nuclear weapons. And until recently most nations agreed to a treaty on the nonproliferation of nuclear weapons.

In 1974, molecular biologists were alarmed when they realized that newly discovered genetic editing tools (recombinant DNA technology) could put tumor-causing genes inside of E. coli bacteria. There was concern that without any recognition of biohazards and without agreed-upon best practices for biosafety, there was a real danger of unwittingly creating and unleashing something with dire consequences. They asked for a voluntary moratorium on recombinant DNA experiments until they could agree on best practices in labs. In 1975, the U.S. National Academy of Sciences sponsored what is known as the Asilomar Conference. Here biologists came up with guidelines for lab safety containment levels depending on the type of experiments, as well as a list of prohibited experiments (cloning things that could be harmful to humans, plants and animals).

Until recently these rules have kept most biological lab accidents under control.

Nuclear weapons and genetic engineering had advocates for unfettered experimentation and no controls. “Let the science go where it will.” Yet even these minimal controls have kept the world safe for 75 years from potential catastrophes.

Goldman Sachs economists predict that 300 million jobs could be affected by the latest wave of AI. Other economists are just realizing the ripple effect that this technology will have. Simultaneously, new startups are forming, and venture capital is already pouring money into the field at an astounding rate that will only accelerate the impact of this generation of AI. Intellectual property lawyers are already arguing over who owns the data these AI models are built on. Governments and military organizations are coming to grips with the impact that this technology will have across Diplomatic, Information, Military and Economic spheres.

Now that the genie is out of the bottle, it’s not unreasonable to ask that AI researchers take 6 months and follow the model that other thoughtful and concerned scientists did in the past. (Stanford took down its version of ChatGPT over safety concerns.) Guidelines for use of this tech should be drawn up, perhaps paralleling the ones for genetic editing experiments – with Risk Assessments for the type of experiments and Biosafety Containment Levels that match the risk.

Unlike the moratoriums on atomic weapons and genetic engineering, which were driven by the concern of research scientists without a profit motive, the continued expansion and funding of generative AI is driven by for-profit companies and venture capital.

Welcome to our brave new world.

Lessons Learned

  • Pay attention and hang on
  • We’re in for a bumpy ride
  • We need an Asilomar Conference for AI
  • For-profit companies and VC’s are interested in accelerating the pace