In recent years, AI ethicists have had a tough job. The engineers developing generative AI tools have been racing ahead, competing with one another to create models of ever more breathtaking abilities, leaving both regulators and ethicists to comment on what’s already been done.

One of the people working to shift this paradigm is Alice Xiang, global head of AI ethics at Sony. Xiang has worked to create an ethics-first process in AI development within Sony and in the larger AI community. She spoke to IEEE Spectrum about starting with the data and whether Sony, with half its business in content creation, could play a role in building a new kind of generative AI.

Alice Xiang on…

  1. Responsible data collection
  2. Her work at Sony
  3. The impact of new AI regulations
  4. Creator-centric generative AI

What’s the origin of your work on responsible data collection? And in that work, why have you focused specifically on computer vision?

Alice Xiang: In recent years, there has been a growing awareness of the importance of looking at AI development in terms of the entire life cycle, and not just thinking about AI ethics issues at the endpoint. And that’s something we see in practice as well, when we’re doing AI ethics evaluations within our company: Many AI ethics issues are really hard to address if you’re only looking at things at the end. A lot of issues are rooted in the data-collection process—issues like consent, privacy, fairness, and intellectual property. And a lot of AI researchers are not well equipped to think about these issues. It’s not something that was necessarily in their curricula when they were in school.