IBM has proposed five pillars to address this issue, which together summarize the idea of responsible AI: explainability, transparency, robustness, privacy, and fairness.
Reflection on AI issues, concerns, and ethics
AI as a tool and AI as a decider.
AI fundamentally learns from its input data. People believe that AI will make fair and just decisions, but I have doubts. AI is likely to inherit heavy biases. There is a famous example: when an autonomous driving AI must choose between hitting a child or an elderly person, which one will it choose? People say that philosophical and ethical deliberation is needed to answer this question. My point is that even that deliberation itself is likely to be biased. The source data is mostly English, since most existing language data is in English, and the structure of a particular language easily steers thinking in a particular direction. Can an AI trained on specific languages represent all of humanity? Similarly, generative AIs that express opinions also feel subtly biased; their responses are likely to be somewhat skewed. I wonder how much autonomy should be granted to AI.