Luke McGregor

ChatGPT has taken the world by storm.

It’s exciting to see this next-generation technology being used to make life easier through very human-like interaction between man and machine.

You can ask it any question and receive an answer that sounds like one given by a real person.

And, like humans, ChatGPT’s answers are limited by the data it has learned from, and they get better and better as that data grows.

Recently there has been much talk about the dangers of AI technology and the potential for a regulatory response to address them. Most notably, in the USA there was a congressional hearing with Sam Altman of OpenAI and representatives of several other organisations in the AI space.

Here are the focal points of what Altman said regarding regulation of this technology:

First, it is vital that AI companies–especially those working on the most powerful models–adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results. To ensure this, the US government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements.
Second, AI is a complex and rapidly evolving field. It is essential that the safety requirements that AI companies must meet have a governance regime flexible enough to adapt to new technical developments. The U.S. government should consider facilitating multi-stakeholder processes, incorporating input from a broad range of experts and organizations, which can develop and regularly update the appropriate safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems subject to license or registration.
Third, we are not alone in developing this technology. It will be important for policymakers to consider how to implement licensing regulations on a global scale and ensure international cooperation on AI safety, including examining potential intergovernmental oversight mechanisms and standard setting.

From the outside, it might seem unlikely that the CEO of an AI company would ask for complex licensing, constantly shifting goalposts and expensive testing procedures, so it’s important to understand how these regulatory changes would benefit OpenAI, and conversely hurt other businesses.

OpenAI is a large player in the AI space, made even larger by Microsoft’s recent multibillion-dollar investment. This gives them the size to weather the prohibitive cost of regulatory compliance. Regulation would create barriers to entry for new competitors and consolidate more of the AI problem space into exceptionally large companies. That would be extremely profitable for the players already established in the space, such as OpenAI, Google and Microsoft.

Regulation is unlikely to move at the pace of technical development in the AI space, and even if it did, smaller companies would find it almost impossible to keep up with the constantly changing rules.

The technology behind OpenAI is mostly not defensible IP; other companies with enough money to train a model could compete with OpenAI's product. There is currently a wide variety of open-source models that differ from ChatGPT mostly in the quantity of training rather than the sophistication of the model itself. Regulation could well cull many of these emerging competitors, giving OpenAI breathing space to consolidate its position.

A more altruistic regulatory suggestion came from Christina Montgomery of IBM: transparency about when AI is in use.

Be Transparent, Don’t Hide Your AI

Americans deserve to know when they are interacting with an AI system, so Congress should formalize disclosure requirements for certain uses of AI. Consumers should know when they are interacting with an AI system and whether they have recourse to engage with a real person, should they so desire. No person, anywhere, should be tricked into interacting with an AI system. AI developers should also be required to disclose technical information about the development and performance of an AI model, as well as the data used to train it, to give society better visibility into how these models operate. At IBM, we have adopted the use of AI Factsheets – think of them as similar to AI nutrition information labels – to help clients and partners better understand the operation and performance of the AI models we create.

This seems like a far more useful regulation: it would be inexpensive to implement, it wouldn't lock out newer, less established players, and it would provide users with informed choice.

Regulation of AI technology risks concentrating control in exceptionally large companies, which can stifle innovation.

AI is heavily based on data, and the total capability of any system is limited by the quantity and quality of its training data. One of the fundamental ways of protecting people from the negative impacts of AI is to control the data that users give to such systems.

As with many technologies, there are implications to sharing data. A better understanding of the personal costs of sharing data with AI will help us all make more informed decisions about who we share data with and what we let them do with it.

As consumers, we should look for products that give us strong guarantees of privacy and data security. Realistically, we need to accept that this comes with an increased direct financial cost in exchange for our long-term digital security.

Luke McGregor is a software architect at software specialist Company-X.