I’ve got two big reasons to be skeptical about the budding romance between the AI industry and regulators, and they both have to do with the lowly butterfly. Before we get to my concerns, let’s revisit the recent Senate hearing on “Oversight of AI.”
The hearing took place on May 16 in front of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law. I decided to bite the bullet on this one and watch it myself – all three hours of it. I’m glad I did. I’d encourage anyone genuinely interested in the topic to watch it. It was more thoughtful and informative than I expected.
The best link to find it is here. It includes a video of all three hours of testimony along with the written opening statements of the three witnesses – Samuel Altman (CEO, OpenAI), Christina Montgomery (Chief Privacy and Trust Officer, IBM), and Professor Gary Marcus (Professor Emeritus, NYU, author, independent researcher).
Senator Blumenthal, the presiding chair of the subcommittee, tried to draw Altman into the doomsday scenario for AI by quoting Altman himself, who is on record as saying, “The development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”
Altman didn’t bite. He was coy at the hearing about detailing his worst fears, but he did say that he fears his industry can cause “significant harm to the world,” and “if this technology goes wrong, it can go quite wrong.”
One of my favorite comments from Altman regarding GPT-4 was:
“It’s important to understand and think about GPT-4 as a tool and not a creature. It’s a tool that people have a great deal of control over.”
I couldn’t agree more, and I appreciate the humility here, but I can’t help but notice that Altman regularly plays the other side of the aisle when he speculates about this tool being “the greatest threat to the continued existence of humanity.”
Another of Altman’s big contradictions is that he constantly reminds us how altruistic he and the OpenAI team are, that OpenAI is a nonprofit, and that Altman’s own compensation is nothing like the CEOs of other big tech companies. Meanwhile, no one has done more to escalate the AI arms race than Altman and OpenAI.
I don’t care that OpenAI is a nonprofit. I don’t care that Altman doesn’t get paid what other CEOs do. I know that Microsoft, Google, and others are now locked in mortal combat over this new technology, and OpenAI undeniably helped to kick off this arms race when it cut the deal with Microsoft.
I do believe that Sam Altman is sincere, but I suspect that other recently infamous Sam, namely Sam Bankman-Fried, was sincere too, and look what kind of trouble he caused. Everyone knows about good intentions and the proverbial road to hell. Altman’s over-the-top authenticity makes me nervous.
That said, my hat’s off to the Judiciary Committee for putting this panel of witnesses together. As I mentioned earlier, the conversation was surprisingly interesting and thoughtful.
Ms. Montgomery reminded us that AI is not new and that IBM, for example, has been working on it and deploying it to businesses for decades. She made it clear that everyone today is conflating AI with ChatGPT. AI is much bigger than ChatGPT. AI has been around for over 50 years.
Professor Marcus was also a refreshing witness. Some of the concerns he raised included hard-hitting issues like risks of technocracy, oligarchy, and regulatory capture. These are all concerns I share myself. He was also quite clear that GPT-4 is nowhere near artificial general intelligence (AGI), and he speculated that AGI might even be another 50 years into the future.
That’s refreshing candor from an industry insider and, again, it’s in contrast with Altman, who seems all too ready to invoke the specter of AGI as a way to get attention for his crusade.
One of the most striking moments in the hearing was a comment from the head of the Judiciary Committee, Senator Durbin of Illinois. Durbin said, “I can’t recall a time when industry leaders came to us and pleaded with us to regulate their industry.” He commented that it’s as if the industry is coming to us and saying, “Stop me before I innovate again.”
I’ve got an idea why regulators and the AI industry may be a match made in heaven. They all love rules! The subtitle of the hearing was, tellingly, “Rules for Artificial Intelligence.” The regulators love rules, and AI is literally a bunch of rules (aka algorithms)! Would it be a stretch to suggest that both of these groups would like to see rules play a much larger role in our world today?
Which brings me back to my own concerns, and to the butterfly.

My first concern is what is known as the butterfly effect. The butterfly effect is a concept from the work of meteorologist Edward Lorenz. It’s associated with chaos theory and nonlinear dynamics, and it beautifully captures the idea that complex systems can be extremely sensitive to small changes in their initial conditions.
We really have no idea what this new technology can and will produce yet. ChatGPT is like the proverbial butterfly that flaps its wings in Singapore and ends up impacting the weather in San Francisco.
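For the technically curious, Lorenz’s point is easy to demonstrate in a few lines of code. The sketch below is my own illustration, not anything from the hearing: it runs a crude Euler integration of Lorenz’s famous three-equation “toy weather” model twice, from starting points that differ by just one part in a hundred million.

```python
# Lorenz's three-equation model, integrated with a simple Euler step.
# The parameter values (sigma=10, rho=28, beta=8/3) are the standard
# ones; the integrator itself is a rough sketch for illustration.
def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

def run(x, y, z, steps=6000):
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x, y, z

# Two runs whose starting points differ by one part in a hundred million.
a = run(1.0, 1.0, 1.0)
b = run(1.0 + 1e-8, 1.0, 1.0)

# By the end of the run, that microscopic difference has grown until
# the two "forecasts" bear no resemblance to each other.
print(a)
print(b)
```

That’s the butterfly effect in miniature: a perturbation far too small to measure ends up dominating the outcome, which is exactly why confident predictions about where a new technology will lead deserve some skepticism.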
In watching the committee members and the witnesses discuss these issues, I didn’t see a lot of humility. I saw a lot of smart people with strong opinions about what we should be protecting ourselves against and what we should do now.
Senator Hawley, one of the committee leaders, summed up the concerns he heard during the hearing:
- Loss of jobs
- Invasion of personal privacy
- Manipulation of personal behavior
- Manipulation of personal opinion
- Degradation of free elections in America
I don’t disagree that these are potential risks and concerns. I just don’t agree that they are actual risks and concerns. We don’t need the regulators stepping in now and telling us what they think we need to protect against when the train has barely left the station.
What we do need is transparency – lots of transparency – and a healthy legal system in which companies and individuals can be sued and prosecuted if they use technology to violate existing laws.
My second major concern has to do with what I believe is a fundamental misunderstanding of the nature of AI. The popular idea that AI is potentially “superhuman machine intelligence” is extremely misleading, self-serving, and lacking in evidence.
AI is a new layer in the existing world of computing. AI is, ultimately, nothing but ones and zeros. It is not superhuman or even intelligent. It is artificial. It is rules-based. It is a machine. It is manmade.
As Altman affirmed at the hearing, AI is a tool. Yes, like nuclear weapons, AI is potentially a very dangerous tool, but as with nuclear weapons, the ultimate danger isn’t the tool itself; it’s the people who might use the tool to harm others. No one is claiming that nuclear weapons might fire themselves.
When we forget this obvious fact and we allow ourselves to be seduced by this AI mythology, we open ourselves up to what I regard as the real risk of AI. I call it the “Wizard of Aiz risk.” It’s the risk that we forget that there’s always a person behind the curtain who is ultimately responsible for the actions of the machine.
What exactly does the Wizard of Aiz risk have to do with the butterfly? The butterfly is not a machine. It is not manmade. I can explain AI. I know how it works. I can’t explain the butterfly, and I don’t even need to. I appreciate the butterfly.
The butterfly is one of the most powerful cultural symbols for all of humanity. Throughout the world, the butterfly is a symbol of transformation, hope, faith, and rebirth. The butterfly is fragile, playful, beautiful, and fun.
Do we need the butterfly? Does AI bring us more benefits than the butterfly? Is AI more complex than the butterfly? Would you rather live in a world without AI or without the butterfly?
I know what my answers are to such questions, and I won’t be drawn into the fearmongering of technologists who are ultimately talking their own book.
So, the next time that the AI hype machine gets turned up to 11, remember the butterfly. It will help to keep things in perspective.