How to Approach the Ethics of AI in Learning and Development


Artificial intelligence is showing up everywhere in today’s workplaces. From drafting member emails to outlining learning modules, AI offers speed and efficiency, something especially valuable when budgets are tight and staff are stretched thin. But with all that convenience comes an important question: where is this content really coming from? Because many AI tools are trained on huge amounts of internet data, leaders have to stop and ask:

  • Whose voices are being included?
  • Whose voices are missing?
  • What happens when weak or misleading information shapes the results?

As associations and non-profits bring AI into learning and development, the way content is sourced, reviewed, and shared will make all the difference. 

Why Does Data Quality Matter in AI?

You’ve probably heard the saying, “garbage in, garbage out.” That idea applies to AI just as much today as it did in the early days of computers. AI systems are only as strong as the information they learn from, and the prompts we give them. If the data is biased or outdated, or if the prompt is unclear, the results won’t be very reliable.

What is training data in AI?

Training data is the information used to teach an AI system how to recognize patterns and make predictions. It can include:

  • Research papers and peer-reviewed journals
  • Open-source articles and databases
  • Blogs and opinion pieces
  • Social media posts and user-generated content

The challenge? High-quality, peer-reviewed research can sit right next to weak or misleading information. As researchers put it in The Effects of Data Quality on Machine Learning Performance, “the performance of AI-enhanced systems in practice is proven to be bounded by the quality of the underlying training data.”

In other words: what goes in shapes what comes out. If AI is trained on a mix of strong and weak sources, the results can blur the line between trustworthy expertise and shaky information.

Why do prompts matter in AI?

The way you phrase a request makes a difference in the answer you get. A broad prompt will usually give you something generic and surface-level. But if you’re more specific, the tool has stronger direction and produces content that’s far more relevant.

Research backs this up. A recent study on Prompt Engineering and the Quality of AI-Driven Feedback found that well-designed prompts led to consistently higher-quality feedback in teacher training programs. Another paper on Unleashing the Potential of Prompt Engineering for Large Language Models noted that prompts with clear structure and context improved both the accuracy and usefulness of AI outputs.

Good prompts won’t completely eliminate the risk of bias, but they can steer AI toward results that are more aligned with your learners, your goals, and your values.
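To make the broad-versus-specific contrast concrete, here is a minimal sketch of how added context changes what you actually send to an AI tool. The `build_prompt` helper and its fields are purely illustrative, not part of any particular product's API:

```python
# Illustrative sketch: a broad prompt vs. one that supplies audience,
# goal, and tone. The build_prompt helper is hypothetical.

def build_prompt(task, audience=None, goal=None, tone=None):
    """Assemble a prompt, adding context only when it's provided."""
    parts = [task]
    if audience:
        parts.append(f"Audience: {audience}.")
    if goal:
        parts.append(f"Learning goal: {goal}.")
    if tone:
        parts.append(f"Tone: {tone}.")
    return " ".join(parts)

# A broad prompt gives the tool little direction:
broad = build_prompt("Write a training module on leadership.")

# A specific prompt steers it toward your learners and goals:
specific = build_prompt(
    "Write a training module on leadership.",
    audience="first-time managers at a member association",
    goal="running effective volunteer committee meetings",
    tone="practical and encouraging",
)

print(broad)
print(specific)
```

The same pattern works in any tool: the more of your context you put into the request, the less the AI has to guess.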

How should you review AI outputs with a critical lens?

Even with strong prompts and solid data, your work isn’t done. AI can sound confident while still being wrong. And with 92% of companies planning to increase their AI investments over the next three years (McKinsey & Company), the pressure to get data quality right will only grow. That’s why it’s so important to look at AI outputs with a critical eye. Ask yourself:

  • Are the facts accurate?
  • Is there hidden bias in the language or examples?
  • Does the output align with your mission and values?

Taking a few minutes to review results through this lens can be the difference between content that builds trust and content that undermines it.
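One way to make that review habitual is to treat the three questions above as a simple gate that every AI-assisted draft has to pass. This sketch is illustrative; the field names are placeholders for whatever your own review process tracks:

```python
# Illustrative sketch: the three review questions as a checklist gate.
# Field names are hypothetical; adapt them to your own review process.

from dataclasses import dataclass

@dataclass
class OutputReview:
    facts_verified: bool    # Are the facts accurate?
    bias_checked: bool      # Any hidden bias in the language or examples?
    mission_aligned: bool   # Does the output match mission and values?

    def approved(self) -> bool:
        """A draft passes only when every question has been answered yes."""
        return self.facts_verified and self.bias_checked and self.mission_aligned

# A draft that hasn't cleared the mission check stays unapproved:
draft = OutputReview(facts_verified=True, bias_checked=True, mission_aligned=False)
print(draft.approved())
```

Whether you encode it in software or on a sticky note, the point is the same: no single check is optional.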

Common AI Questions in L&D

AI often gets attention for its speed, innovation, and efficiency. But for organizations that have responsibilities and reputations to uphold, the bigger question isn’t just “Is it useful?” but “How do we use it responsibly?” The real focus should be on how AI can support learning, strengthen member trust, and protect your organization’s credibility.

Here are some of the key questions leaders are starting to ask:

   1. Is it okay to use AI to draft training materials?

Yes – AI can be a helpful tool for drafting training materials, as long as it’s used wisely. The appeal is clear: faster drafts free up staff to spend more time advancing the mission. But speed alone isn’t enough. AI should be treated as a starting point, with human expertise shaping the final product so it feels meaningful, accurate, and aligned with the unique voice of the community.

According to Foundation Magazine, the payoff can be significant:

  • Up to 30% lower administrative costs 
  • Up to 40% higher productivity

Still, challenges remain. Less than 2% of foundation grants currently support nonprofit technology adoption, which leaves many associations struggling to access the funding needed to fully benefit from these tools.

   2. Can AI reflect our mission and values without human oversight?

For associations and non-profits, every piece of training material reflects their mission and values. AI can’t fully capture these priorities on its own, but it can be guided. When organizations train staff to use AI thoughtfully, they can create learning materials that are both effective and true to their purpose.

   3. Do employees need training on AI tools?

Absolutely, and they’re asking for it. As AI becomes part of everyday work, staff don’t just want to see the tools rolled out. They want the skills to use them well. A recent McKinsey & Company survey found that nearly half of employees believe formal training is the best way to build confidence and increase adoption. Many also want hands-on access through pilots or beta programs so they can learn by doing.

   4. Where does AI fall short in L&D?

AI is powerful at summarizing, outlining, and suggesting structures. But it has clear limits. It can produce generic content that doesn’t capture the culture or context that associations need in their learning materials.

For example, an AI-made list of “leadership skills” might look fine, but it won’t reflect the real challenges members face in a specific field.

   5. How do we verify and fact-check AI outputs?

Verification is one of the biggest concerns with AI. If the tool doesn’t show its sources, how can leaders trust the information? A survey shared in The Conversation found that 66% of employees have used AI outputs without checking them first, a risky habit in areas where accuracy is essential. It’s no surprise that 56% admitted they’ve made mistakes in their work because of AI.

Forward-thinking associations are tackling this by putting verification steps in place, such as:

  • Sending AI-assisted drafts to SMEs
  • Creating peer-review systems
  • Cross-checking results with trusted sources

These practices keep the efficiency of AI while making sure the final work is accurate and credible.
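The verification steps above work best in a fixed order: expert review first, then a peer pass, then a final cross-check. As a rough sketch (the step functions here are placeholders for your own workflow, not a real system):

```python
# Illustrative sketch: the three verification practices as an ordered
# pipeline. Each step function stands in for a real human review stage.

def sme_review(draft):
    # In practice: route the draft to a subject-matter expert.
    return draft + " [SME-reviewed]"

def peer_review(draft):
    # In practice: a second staff member checks tone and accuracy.
    return draft + " [peer-reviewed]"

def cross_check(draft):
    # In practice: compare claims against trusted external sources.
    return draft + " [cross-checked]"

VERIFICATION_STEPS = [sme_review, peer_review, cross_check]

def verify(draft):
    """Pass an AI-assisted draft through every verification step in order."""
    for step in VERIFICATION_STEPS:
        draft = step(draft)
    return draft

print(verify("AI-assisted draft"))
```

The ordering matters: a cross-check is cheap once an expert has already flagged the claims worth checking.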

Final Thoughts

AI is here to stay, and for associations and non-profits it brings both exciting opportunities and serious responsibilities. Strong data, clear prompts, and careful human review are what keep AI from producing subpar content.

At the end of the day, AI should never replace the insight that people bring. Instead, it works best as a partner. A tool that, when used responsibly, helps organizations deliver training that is both accurate and impactful.

Free Resource: 7 Steps to Develop an Engaging Course Curriculum

Want to go deeper into creating meaningful learning experiences? Download our free guide, 7 Steps to Develop an Engaging Course Curriculum. It’s filled with strategies you can apply right away to strengthen your training programs.

About the author 

Vraya Forrest
