- Whose voices are being included?
- Whose voices are missing?
- What happens when weak or misleading information shapes the results?
Why Does Data Quality Matter in AI?
You’ve probably heard the saying, “garbage in, garbage out.” That idea applies to AI just as much today as it did in the early days of computers. AI systems are only as strong as the information they learn from, and the prompts we give them. If the data is biased or outdated, or if the prompt is unclear, the results won’t be very reliable.
What is training data in AI?
Training data is the information used to teach an AI system how to recognize patterns and make predictions. It can include:
- Research papers and peer-reviewed journals
- Open-source articles and databases
- Blogs and opinion pieces
- Social media posts and user-generated content
Why do prompts matter in AI?
The way you phrase a request makes a difference in the answer you get. A broad prompt will usually give you something generic and surface-level. But if you’re more specific, the tool has stronger direction and produces content that’s far more relevant. Research backs this up. A recent study on Prompt Engineering and the Quality of AI-Driven Feedback found that well-designed prompts led to consistently higher-quality feedback in teacher training programs. Another paper, Unleashing the Potential of Prompt Engineering for Large Language Models, noted that prompts with clear structure and context improved both the accuracy and usefulness of AI outputs. Good prompts won’t completely eliminate the risk of bias, but they can steer AI toward results that are more aligned with your learners, your goals, and your values.
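To see the difference in practice, here’s a minimal sketch that sends the same request twice: once as a broad prompt, once with the audience, goal, tone, and format spelled out. It assumes the OpenAI Python SDK and an API key in your environment; the model name and prompt wording are illustrative, not a recommendation.

```python
# A minimal sketch comparing a broad prompt with a specific one.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; model name and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()

# Broad prompt: tends to return generic, surface-level material.
broad = "Write a training module on leadership."

# Specific prompt: audience, goal, tone, and format give the model direction.
specific = (
    "Write a 500-word introduction to a leadership training module for "
    "volunteer committee chairs at a professional association. Focus on "
    "running effective meetings and delegating to busy volunteers, keep the "
    "tone warm and plain-spoken, and end with three reflection questions."
)

for prompt in (broad, specific):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; use whichever model your organization has approved
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the first few hundred characters of each answer for comparison.
    print(response.choices[0].message.content[:300], "\n---")
```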
How should you review AI outputs with a critical lens?
Even with strong prompts and solid data, your work isn’t done. AI can sound confident while still being wrong. And with 92% of companies planning to increase their AI investments over the next three years (McKinsey & Company), the pressure to get data quality right will only grow. That’s why it’s so important to look at AI outputs with a critical eye. Ask yourself:
- Are the facts accurate?
- Is there hidden bias in the language or examples?
- Does the output align with your mission and values?
Common AI Questions in L&D
AI often gets attention for its speed, innovation, and efficiency. But for organizations that have responsibilities and reputations to uphold, the bigger question isn’t just “Is it useful?” but “How do we use it responsibly?” The real focus should be on how AI can support learning, strengthen member trust, and protect your organization’s credibility. Here are some of the key questions leaders are starting to ask:
1. Is it okay to use AI to draft training materials?
Yes, AI can be a helpful tool for drafting training materials, as long as it’s used wisely. The appeal is clear: faster drafts free up staff to spend more time advancing the mission. But speed alone isn’t enough. AI should be treated as a starting point, with human expertise shaping the final product so it feels meaningful, accurate, and aligned with the unique voice of the community. According to Foundation Magazine, the payoff can be significant:
- Up to 30% lower administrative costs
- Up to 40% higher productivity
2. Can AI reflect our mission and values without human oversight?
For associations and non-profits, every piece of training material reflects their mission and values. AI can’t fully capture these priorities on its own, but it can be guided. When organizations train staff to use AI thoughtfully, they can create learning materials that are both effective and true to their purpose.
3. Do employees need training on AI tools?
Absolutely, and they’re asking for it. As AI becomes part of everyday work, staff don’t just want to see the tools rolled out. They want the skills to use them well. A recent McKinsey & Company survey found that nearly half of employees believe formal training is the best way to build confidence and increase adoption. Many also want hands-on access through pilots or beta programs so they can learn by doing.
4. Where does AI fall short in L&D?
AI is powerful at summarizing, outlining, and suggesting structures. But it has clear limits. It can produce generic content that doesn’t capture the culture or context that associations need in their learning materials. For example, an AI-generated list of “leadership skills” might look fine on the surface, but it won’t reflect the real challenges members face in a specific field.
5. How do we verify and fact-check AI outputs?
Verification is one of the biggest concerns with AI. If the tool doesn’t show its sources, how can leaders trust the information? A survey shared in The Conversation found that 66% of employees have used AI outputs without checking them first, a risky habit in areas where accuracy is essential. It’s no surprise that 56% admitted they’ve made mistakes in their work because of AI. Forward-thinking associations are tackling this by putting verification steps in place, such as:
- Sending AI-assisted drafts to subject matter experts (SMEs)
- Creating peer-review systems
- Cross-checking results with trusted sources