I have written extensively in the past about ChatGPT, Google Bard, and Microsoft Copilot. All are artificial intelligence (AI) tools that use natural language processing (NLP) to generate responses in a human-like tone. This is termed generative AI. Generative AI refers to the use of specialized algorithms that sift through scads of data to create an output in the form of text, audio, images, or video.

OpenAI and Google were clearly the frontrunners in this race. Since then, many more have jumped in. These programs have nearly limitless possibilities, and they are changing our lives in big ways. However, as good as they are, they still carry inherent legal and ethical risks. I will highlight some of those risks for you here.

None of what these programs do is particularly new, but how they do it is. For example, we have long been able to ask Siri or Alexa to find songs from certain bands or musicians, tell us what happened on a certain date, provide us with customer reviews for specific products, select seats on an airline, and more. This is what basic AI can already do.

Today’s AI can do far more. It can generate a P&L (profit and loss) statement for a specific company or product, write an article on Elon Musk, and even pull demographic statistics by city and locale. But how ChatGPT and other AI tools arrive at these responses can be incomplete, fabricated, or based on false information; riddled with biases; prone to misinterpretation; and patently inconsistent depending on how the question is phrased. As such, we should take a “questioner beware” stance until these flaws are sufficiently remedied, or at the very least run our own fact-check.

The response mechanism of AI is only as good as the information available to it. As we know, there is plenty of misinformation on the internet. If the chatbot is unable to interpret the question correctly, it may pull outdated or insufficient information to respond to the query.

Building on that misinformation, we may also get entirely false information in response to a query. This is referred to in the AI world as a hallucination: the model confidently produces a fabricated or inaccurate output, often in response to a vague or ambiguous question.
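
There is no foolproof way to catch a hallucination, but one cheap sanity check is to ask the model the same question several times and see whether the answers agree. Below is a minimal sketch, assuming the official OpenAI Python SDK and an API key in the environment; the model name is also an assumption, so substitute whatever you use.

```python
# Crude consistency check: ask the same question several times and
# compare the answers. Wildly divergent answers are a warning sign
# that the model may be hallucinating rather than recalling a fact.
from collections import Counter

from openai import OpenAI  # assumes the official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def sample_answers(question: str, n: int = 5) -> list[str]:
    """Collect n independent answers to the same question."""
    answers = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; swap in your own
            messages=[{"role": "user", "content": question}],
            temperature=1.0,  # sampling noise exposes instability
        )
        answers.append(resp.choices[0].message.content.strip())
    return answers


answers = sample_answers("In what year was the Blueprint for an AI Bill of Rights released?")
tally = Counter(a.lower() for a in answers)
print(tally)  # if no single answer dominates, treat the claim as unverified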

Biases in AI can be quite problematic. For instance, redlining is illegal in the realm of real estate sales and rentals, but if a question is asked of AI in a particular way, it might return a biased response. Suppose a campaign manager asks AI to solicit or not solicit people in particular zip codes because those people have a very low voter turnout. This can be construed as a bias because it may be directed at migrant workers or at characteristics that are illegal to target, e.g. religion, sex, age, gender, … AI has no way of discerning what is and is not biased on the face of the question; this must be learned over time.

Depending on how the question is worded, the chatbot may latch onto specific keywords instead of the user’s actual intent. An incorrect interpretation by the AI can generate an inaccurate or misleading reply. Wording is terribly important to obtaining the response you, the querier, hoped for, and the position of words in the query also plays a role in what the AI responds to.
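
A quick way to see this for yourself is to send the same underlying question worded a few different ways and compare the replies side by side. This is a minimal sketch, again assuming the OpenAI Python SDK; temperature is pinned to 0 so that wording, not sampling noise, is the only variable.

```python
# Small experiment: the same underlying question, worded three ways.
# Comparing the replies shows how phrasing and word order steer
# what the model latches onto.
from openai import OpenAI

client = OpenAI()

variants = [
    "List the risks of chatbots in customer service.",
    "In customer service, what risks do chatbots create?",
    "Are chatbots risky? I mean in customer service specifically.",
]

for prompt in variants:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic-ish, so wording is the only variable
    )
    print(f"--- {prompt}\n{resp.choices[0].message.content}\n")
```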

Any of these situations can present unintended risks to the user of an AI program. Because the culling of information for these chatbots comes from a variety of platforms across the internet, you may see malware-infested links inserted into replies, damaging the credibility and integrity of your query and response as well as the device used for the query, i.e. your computer or phone.
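
One practical mitigation is to never render a link from a chatbot reply without vetting it first. Here is a minimal sketch using only the Python standard library; the allowlist of hosts is a placeholder assumption and would be replaced by your own policy or a link-reputation service.

```python
# Defensive post-processing: extract every URL from a chatbot reply and
# refuse to render any link whose host is not on an explicit allowlist.
import re
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.nist.gov", "www.whitehouse.gov"}  # placeholder allowlist

URL_RE = re.compile(r"https?://\S+")


def vet_links(reply: str) -> str:
    """Replace any URL pointing outside the allowlist with a warning tag."""
    def check(match: re.Match) -> str:
        url = match.group(0)
        host = urlparse(url).netloc.lower()
        return url if host in ALLOWED_HOSTS else "[link removed: unverified host]"
    return URL_RE.sub(check, reply)


reply = "See https://www.nist.gov/itl/ai-risk-management-framework and http://evil.example.com/payload"
print(vet_links(reply))  # the second link is stripped before display
```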

AI used with ill intent can also leak or harvest personal information, helping criminals do even more damage to you personally or financially. AI has been used to create racist and discriminatory commentary, solicit bank or credit card information, develop false narratives that damage business reputations, expose company or employee profiles, infringe upon intellectual property rights, and support conspiracy theories. The sky is the limit with what bad actors may be able to inflict upon society using the power of AI.

The Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) has joined the Writers Guild of America (on strike since May 2023) on the picket lines, in part because members feel they have been unjustly targeted by chatbots doing their work for free or replicating and creating additional content from their work without consent.

Actors are worried about AI replicating their performances or voices without proper disclosure, payment, and consent. Shenarow, co-founder of Three Point Zero Studios, said, “AI should be used as a tool, not as a way to replace us.” Beyond that, the unions want better wages and working conditions along with pension and health benefits. They also want equitable residual payments for both unions, because most of the time revisions or additions to scripts are considered unpaid work.


Duncan Crabtree-Ireland, national executive director of SAG-AFTRA, said (paraphrasing) that the studios’ proposals thus far have been anything but groundbreaking. Some believe the producers are dragging things out until the writers run out of funds and are forced back into negotiations.

Bob Iger, CEO of Disney, said the studios managed to make a good deal with the directors and had hoped to do the same with the actors and writers, but that what they are asking for is not realistic. He added that the strike has had consequences for support personnel in the industry, too, which was just beginning to recover from the COVID-19 pandemic.

The White House released the Blueprint for an AI Bill of Rights, outlining principles for the equitable, non-discriminatory use of algorithms such as those behind chatbots. The National Institute of Standards and Technology has constructed a risk management framework for users of AI, along with guidance for managing the potential risks. More still needs to be done.

These programs can be real game changers if we can remedy these and other risks. Good AI programs are only as credible as the information they are trained on, and it only takes a few bad actors to ruin a good thing for all of us.

Images courtesy of:

https://www.vectorstock.com/royalty-free-vectors/chatgpt-vectors

Resources:

https://www.techtarget.com/searchcustomerexperience/tip/6-risks-of-ChatGPT-in-customer-service

https://www.technologyreview.com/2023/04/03/1070893/three-ways-ai-chatbots-are-a-security-disaster/

https://techpolicy.press/ten-legal-and-business-risks-of-chatbots-and-generative-ai/

https://www.cnbc.com/2023/07/13/sag-actors-union-goes-on-strike-joining-hollywood-writers.html

https://www.axios.com/2023/07/13/sag-aftra-wga-strike-hollywood

https://decrypt.co/148519/actors-approve-strike-ai-fears-bring-hollywood-standstill

