OpenAI introduced ChatGPT, and subsequent iterations of it, in the fall of 2022. Google Bard and Anthropic Claude debuted in the spring of 2023, with iterations rolling out ever since. Microsoft Copilot was released in early summer 2023. All are slated to revolutionize industries from healthcare to education and beyond. However, are we really ready?
Researchers from some of the most prestigious institutions across America say we ought to plan ahead before unleashing these programs carte blanche. According to scientists at the Bosch Center for AI (San Francisco, CA) and Carnegie Mellon University's Center for AI (Pittsburgh, PA), disinformation, harmful content, and hate speech were rampant in all of these systems when the researchers attempted to trick the Large Language Models (LLMs) that power these chatbots.
The researchers described the precise methodology that led to the production of this harmful content, and they were open about disclosing that some of their findings could enable bad actors to use these LLMs in risky or dangerous ways. They believe that organizations using these LLMs face considerable danger, and that these risks should be mitigated before going full bore.
When the researchers asked these programs how to steal a person's identity, build a bomb, or steal money from a specific charity, all responded with a plethora of harmful content presented as fact. As such, the researchers believe there is still more we can, and should, do to protect society against the threat posed by these generative AI-powered chatbots. Being able to trust the output of any of these programs should be priority one!
I wholeheartedly agree with the findings of these sage scientists and their tests. On the one hand, I am excited about where we are with AI; on the other, I am terrified of how it can be abused.
In an effort to rein in and understand the quandary that AI presents for society, the National Institute of Standards and Technology (NIST) has been collaborating with the US Secretary of Commerce to create a socially responsible framework for effectively testing and deploying generative AI tools.
The focus of their working group will be how generative AI creates pictures, text, videos, and other content, with particular attention to OpenAI's ChatGPT. This covers the entire AI lifecycle, from development through deployment to the identification and mitigation of risks.
I wish them luck and success, because our collective future could be in peril if we fail to create a comprehensive and safe way forward for generative AI.