Which issue arises when AI models perform differently across groups because of data biases?


Multiple Choice

Which issue arises when AI models perform differently across groups because of data biases?

- Fairness in AI
- Bias in AI
- Transparency in AI
- Privacy in AI

Correct answer: Bias in AI

Explanation:
Bias in AI is the issue here. When data used to train a model contain biases or aren’t representative of all groups, the model learns patterns that perform better for some groups and worse for others. That leads to different accuracy or error rates across groups, even if the overall performance looks good. Fairness in AI is about reducing or eliminating those disparities, but the underlying problem described is bias in AI. Transparency and privacy concern other aspects—how decisions are explained or what data are kept private—rather than the unequal performance caused by biased data.
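The gap between overall and per-group performance described above can be made concrete with a small sketch. This is a hypothetical example with made-up labels and predictions, assuming two groups "A" and "B": overall accuracy looks reasonable, yet one group's accuracy is far worse than the other's.

```python
# Made-up data: true labels, model predictions, and a group tag per example.
y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 1, 0, 1]
group  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

def accuracy(truth, pred):
    """Fraction of predictions matching the true labels."""
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

# Overall accuracy hides the disparity between groups.
print("overall:", accuracy(y_true, y_pred))  # 0.7

# Per-group accuracy exposes it.
for g in ("A", "B"):
    idx = [i for i, v in enumerate(group) if v == g]
    acc = accuracy([y_true[i] for i in idx], [y_pred[i] for i in idx])
    print(g, acc)  # A: 1.0, B: 0.4
```

Auditing per-group metrics like this, rather than a single aggregate score, is how the disparity caused by biased or unrepresentative training data is usually detected.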
