
When AI, Ethics and Data Privacy Collide, What Comes Next?


AI is changing the world of HR, but new tools come with questions. To navigate data privacy, security, and ethics, business leaders need to understand how ethics and AI intersect, the challenges of implementing AI, and how to make the most of AI tools.

Artificial intelligence (AI) is changing the way businesses operate. According to data from research firm Gartner, more than half of HR leaders are exploring the impact of generative AI, and 76 percent believe that if their organisation does not adopt AI tools in the next 12 to 24 months, it will lag behind industry leaders.

But even as organisations explore the advantages of AI-driven decision-making, they're increasingly aware of ethical issues around data privacy and data use.

In an ADP panel for its Insights in Action event, experts discussed what happens when privacy, data, and ethics collide. Moderated by Helena Almeida, vice president of managing counsel for ADP, the session featured Trey Causey, head of responsible AI and senior director of data science for Indeed; John Sumser, founder and principal analyst at HRExaminer.com; and Roger Dahlstrom, senior manager for GenAI Labs at Amazon Web Services.

These professionals explored the concept of ethics in AI, the challenges that come with implementing intelligent tools, and how businesses can make the most of AI.

Ethics in the age of artificial intelligence

Before getting into the details of AI implementation, the panel discusses ethics. To talk about ethics meaningfully, you first need to know what the term means and how it applies to artificial intelligence.

"When I think about ethics, on the left-hand side, you have morality, whether something is right or wrong," says Sumser. "And on the right-hand side, you have legality, which is how you know if something is compliant or not compliant. And the rest of it in the middle is this area called ethics."

Challenges in implementing AI

AI solutions offer HR professionals a way to improve their decision-making, but it has to be acknowledged that this kind of technology is not perfect.

Almeida explains that AI tools can make decisions easier, "but that doesn't mean these decisions are easy," she adds. While AI makes it possible to streamline data collection, correlation, and analysis operations, the solutions it provides aren't set in stone. Even the smartest machines still make mistakes.

The takeaway is that even as AI evolution accelerates, challenges remain for companies considering implementation. Three of the most common include:

Recognising the impact at scale

"One of the biggest things is that generative AI is transformative," Dahlstrom says. "I cannot remember a technology that has progressed from idea to production this quickly. This means we need to be able to make good decisions in the face of high ambiguity. We must be transparent, accountable, and humble. We must be willing to admit we don't know."

This is easier said than done. Many businesses have operational frameworks that shy away from admitting uncertainty. But with AI evolving faster than business processes can keep up, organisations must be willing to adapt, even if it means learning from their mistakes.

Taking accountability for AI results

Accountability plays a key role in AI implementation but may be challenging to achieve. In many cases, issues with accountability stem from the fact that AI tools are complex and powerful enough that they seemingly operate on their own. As a result, businesses are inclined to write off issues with accuracy or reliability as faults in the system rather than problems they need to address.

Educating employees about proper use

Effective use of AI tools in HR means getting employees on board. According to a 2023 survey by Ernst & Young, however, staff are worried about the ethical implications of artificial intelligence. About 65 percent say they're anxious about not knowing how to use AI ethically.

C-suite leaders also need to draft, implement, and enforce clear policies around the use of AI. These policies should detail how data will be used, how data privacy will be maintained, and how employees can retain control of their personal data.

Making the most of AI tools

While AI is a technology framework, Dahlstrom makes it clear that ethics are inherently human.

"It's not a technology issue for the most part," he says. "Instead, we break things out in frameworks. Depending on the use case, you may pull different levers. But the outcome is human."

In practice, this means that making the most of AI tools starts with trust. And according to Sumser, "The pathway to trust begins with admitting the stakes."

Almeida agrees, noting that "ADP has a set of AI and data ethics that helps guide actions and responses."

Bottom line: Don't just say, "AI, AI, captain!"

Businesses are all in the same boat. AI has arrived, and organisations that don't get on board will be left behind. The ethical and privacy issues surrounding AI mean it's not enough to just deploy the tech. Instead, enterprises need to consider how ethics impact AI use and how privacy priorities help write roadmaps for effective implementation.

AI and ethics go hand-in-hand — make sure your business is ready to navigate the next generation of decision-making.


Watch the full Insights in Action session on demand to learn more about the intersection of AI, ethics and data privacy, and check out other panels from the event on how data can ignite growth and progress and how generative AI will affect the workplace of tomorrow.
