Safeguards for using ChatGPT and other bots for HR
Author: Natasha K. A. Wiebusch
ChatGPT and other chatbots have captured the attention of HR leaders everywhere. Though powerful, these tools pose new problems HR should not ignore. Natasha K.A. Wiebusch, one of the HR & Compliance Centre legal editors in the US, reviews four key problems with chatbots - and suggests ways to deal with them.
Chatbots have become increasingly popular in recent years, revolutionising the way businesses interact with customers and automate their operations.
That sentence was written by ChatGPT in response to the query, "Can you write an intro sentence to my article about chatbots?"
As shown above, AI chatbots like ChatGPT provide written responses to questions from users. Though these bots are incredibly powerful, users everywhere are noticing that they sometimes provide incomplete or incorrect answers. On occasion, they even make things up without being prompted. (In the AI industry, these fabrications are called hallucinations.)
Still, these tools have their strengths, and they can help HR pros in many ways. In the future of work, HR leaders will need to learn how to effectively use AI as a partner.
Chatbot problems HR should know about
Inaccuracies and hallucinations aren't the only problems with AI. Warnings about this technology range from job eliminations to a Terminator-esque AI takeover. Though the latter is probably far-fetched, even Google's CEO, Sundar Pichai, has admitted that "some AI systems are teaching themselves skills that they weren't expected to have", and that it is not well understood how this happens.
AI takeover aside, not every concern carries the same weight. Here are four key concerns that HR should know about:
1. Bias
ChatGPT has "shortcomings around bias", according to OpenAI CEO Sam Altman. Without the oversight of a trained user, ChatGPT and other chatbots may provide responses that perpetuate racism, sexism and other forms of discrimination.
There are several potential reasons for this. First, AI algorithms are built by humans who have biases of their own, and they are trained on sources that may themselves be biased. AI also relies on heuristics: shortcuts used to solve problems quickly. We (humans) create heuristics too, and they happen to be one of the primary drivers of unconscious bias.
Safeguards against chatbot bias:
- ensure that all employees understand what bias is and how to identify it;
- independently audit chatbot responses for bias; and
- implement anti-bias standards for tasks that are known pain points for bias, like job descriptions, performance reviews and pay decisions.
2. Inaccuracy
Chatbots sometimes provide incorrect answers to user questions. For example, while researching chatbots for this article, HR & Compliance Centre tested ChatGPT. We asked, "Am I required to provide employees with employee leave in Arkansas?" ChatGPT incorrectly responded that Arkansas has no leave requirements beyond those of the federal Family and Medical Leave Act (FMLA).
The problem with inaccuracies is not just that the answers are wrong; there is also no reliable way of knowing which answers are correct and which are not. Chatbots do not explain how they arrived at their responses. In the AI industry, this is known as the "black box" of the AI system: the AI's decision-making processes, which are not well understood.
For now, the only way of knowing when a chatbot has made an error is if the user already knows the answer to their own question.
Safeguards against inaccuracies:
- thoroughly research the chatbot's capabilities and best uses;
- set clear parameters for what types of tasks chatbots can be used for;
- ensure that employees monitor and maintain the chatbot;
- require that chatbot outputs be independently verified; and
- prohibit the use of chatbots for advanced research and compliance questions.
3. Cybersecurity and privacy
Chatbots have coding capabilities that may attract hackers looking to enhance their phishing attempts and malware. If a chatbot on an employer's website is hacked, the result can be a serious security breach and liability for the employer.
Also, be aware that chatbots are not privacy-friendly. For example, ChatGPT's privacy policy specifically states that ChatGPT collects personal information, including your name, contact information, the content of your queries, your location and IP address, among other information. That information may be shared with third parties.
Safeguards for cybersecurity and employee privacy:
- consult with the company IT team to ensure leading practices are followed;
- thoroughly research chatbots before choosing one to ensure it is reputable and uses high-quality data;
- do not provide chatbots with personally identifiable information or personal health information; and
- implement encryption, authentication and other security systems to prevent the chatbot from being misused.
4. User error
Finally, many employees may not know how to use a chatbot. Employers must recognise that chatbots are unlike anything we have ever seen before, and they require upskilling. Employees will need to understand how these new tools work, what their limitations are, and how to audit and maintain them.
Safeguards against user error:
- train employees in how chatbots work, AI ethics and relevant policies; and
- establish a gradual adoption plan that allows employees time to understand their new partner.
Legal safeguards
As AI continues to evolve, so does the regulatory landscape. In addition to implementing internal safeguards, employers will need to remain vigilant of new legal and political developments related to AI.
For example, Illinois' Artificial Intelligence Video Interview Act, which took effect in 2020, regulates the use of AI software to assess video interviews of job candidates. This July, New York City will begin enforcing Local Law 144, which prohibits employers from using AI in recruitment and promotion decisions without first auditing the AI for bias. And, in 2022, the White House issued a Blueprint for an AI Bill of Rights.
Chatbots are here to stay
According to a recent survey by Eightfold AI, 92% of HR leaders plan to increase AI use in at least one HR area, showing that AI really is HR's newest partner.
What is clear is that, despite its growing pains, ChatGPT can make work more efficient by completing certain tedious, repetitive tasks faster. In doing so, it can free up humans to do the complex thinking required for more important work. In the spirit of embracing the future of work, let's finish this the way we started:
While this technology presents exciting opportunities, it also raises important challenges that must be addressed to ensure its safe and responsible development and use. By doing so, we can harness the full potential of AI to create a better future for all. - ChatGPT.