- Machine Learning tools have raised many safety concerns, chief among them the lack of a clear definition of what safety means.
- The harm AI can cause affects humanity at the individual, societal, organizational and environmental levels.
- The world currently fears that machines might become “too intelligent to be controlled”, producing a mix of panic and excitement.
We have just wound down the first quarter of 2023, and a lot is happening in the Artificial Intelligence space. OpenAI has launched ChatGPT and GPT-4, technologies that have disrupted internet search. In response, Google and Microsoft have introduced Google AI and Microsoft 365 Copilot to capitalize on the fast-growing AI audience.
Despite these fantastic inventions that have caught the eye of internet consumers, AI has been around for quite a while. AI systems have become part of our everyday lives, from the technology in our home devices to the recommendations we get when browsing or shopping online.
What does safety in AI mean?
However, Machine Learning tools have raised many safety concerns, chief among them the lack of a clear definition of what safety means. We will define safety in AI as the deployment of AI in ways that do not harm humanity. Interestingly, Ben Shneiderman, author of the book Human-Centered AI, acknowledges that total safety is not possible, arguing instead that safer is possible.
The harm AI can cause affects humanity at the individual, societal, organizational and environmental levels.
The potential harm of AI to individuals and society
- Harm to a person’s civil rights and liberties, and infringement of their physical and psychological safety and economic opportunities.
- Discrimination against a particular population or sub-group, for example, racism.
- Harm to democratic participation or educational access.
The potential harm of AI to organizations
- Harm to an organization’s business operations
- Security breaches
- Damage to their business reputation
The potential harm of AI to the environment
- Overconsumption of electricity. The rise of deep learning and large language models has exponentially increased the computing capacity AI needs, driving up electricity demand.
- Carbon emissions. The more electricity AI consumes, the more carbon is emitted. Creators can, however, adopt cleaner energy sources to run their operations.
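The electricity–emissions relationship above is simple arithmetic: energy consumed multiplied by the carbon intensity of the grid supplying it. The intensity figures in this sketch are illustrative assumptions, not measured values for any real data centre.

```python
def carbon_emissions_kg(energy_kwh: float, grid_intensity_kg_per_kwh: float) -> float:
    """Estimate CO2 emitted for a given electricity draw.

    grid_intensity_kg_per_kwh is the carbon intensity of the power grid
    (kg CO2 per kWh); cleaner energy sources mean a lower value.
    """
    return energy_kwh * grid_intensity_kg_per_kwh

# Illustrative comparison: the same AI workload on a fossil-heavy grid
# (~0.7 kg/kWh, assumed) versus a cleaner one (~0.1 kg/kWh, assumed).
workload_kwh = 10_000
print(carbon_emissions_kg(workload_kwh, 0.7))  # 7000.0 kg CO2
print(carbon_emissions_kg(workload_kwh, 0.1))  # 1000.0 kg CO2
```

The comparison shows why switching energy sources, not just reducing compute, is a lever creators can pull.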
From a creator’s perspective, we look at what OpenAI (one of the most successful AI companies) is doing to enhance safety in AI.
OpenAI’s approach to AI safety
Before releasing any of its products to the market, OpenAI has committed to a system of rigorous testing, engagement of external experts, refinement of model responses through human feedback, and the building of general safety and monitoring systems.
In the case of GPT-4, for example, OpenAI spent six months after training making the product safer before releasing it to the public.
There is, however, a limit to how well an AI system can be understood before the product launches on the market. However much research is done in the pre-launch phase, there is no predicting how users will utilize the technology, positively or negatively.
“We cautiously and gradually release new AI systems—with substantial safeguards in place—to a steadily broadening group of people and make continuous improvements based on the lessons we learn” – OpenAI.
Measures taken by OpenAI to protect the community
OpenAI further requires that users of its AI products be 18 years or older, or 13 years or older with parental approval. It has also restricted its tools from generating racist, hateful, gender-based, violent or adult content, among other categories.
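The age policy above reduces to a simple rule: access at 18, or at 13 with parental approval. The function below is a hypothetical sketch of such a gate, not OpenAI’s actual implementation.

```python
def may_use_service(age: int, has_parental_approval: bool = False) -> bool:
    """Age gate matching the stated policy: 18+, or 13+ with parental approval."""
    if age >= 18:
        return True
    return age >= 13 and has_parental_approval

print(may_use_service(21))                              # True
print(may_use_service(15, has_parental_approval=True))  # True
print(may_use_service(15))                              # False
```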
“We work to remove personal information from the training dataset where feasible, fine-tune models to reject requests for the personal information of private individuals, and respond to requests from individuals to delete their personal information from our systems. These steps minimize the possibility that our models generate responses that include the personal information of private individuals” – OpenAI.
OpenAI has also invested in improving factual accuracy. GPT-4, for example, is 40 per cent more likely than GPT-3.5 to produce factual content.
The Downside of AI
Despite the measures taken by AI companies, continuous research and user engagement must remain key priorities in their operations. The world currently fears that machines might become “too intelligent to be controlled”, producing a mix of panic and excitement among AI users.
Existing AI tools are wholly dependent on existing data and require intensive human intervention to operate as factual and reliable tools. Regulations need to be drafted to establish a universal alignment on, and definition of, safety in AI. The world is not ready for another tool that manipulates and exploits people’s data, regardless of whether those people are responsible for feeding the machines.