Welcome to the first blog post covering our bi-weekly conversations about ethics in today's developments in AI!
The Coffee, Ethics and AI Open Coffee Club has been meeting for over two months at this point, and we've covered topics ranging from Tesla's Autopilot to AI in healthcare. In one of our latest sessions, we discussed the first attempt by a major regulator to govern AI: the EU AI Act.
The EU AI Act is proposed legislation in the European Union that is still receiving feedback in the European Parliament. While the Act includes many nuances and conditions (over 100 pages!), we chose to focus on the core method it uses to moderate AI technology: risk categorization. The Act assigns AI technologies to one of three risk levels, which determine the degree of governance: unacceptable risk, high risk, and unregulated.
Things discussed: Is this the right approach?
Coffee club participants with experience in the medical and linguistic fields shared ways they had encountered bias and poor management in data science and machine learning applications, agreeing, in general, that legislation and standardized governance were a move in the right direction.
However, when discussing whether this was the right approach to begin navigating this space, the group identified a few challenges: (a) the legislation would need to be officially amended for every new type of technology or previously unidentified risk, (b) change would be slow because risk assessment depends in part on disclosure by private corporations, and (c) further research would be needed to apply the guidance to specific industries.
What do you think?
Credit to NLP references: Chung-Fan Tsai!
Follow us on Meetup to learn about our next coffee chat! https://www.meetup.com/coffee-ethics-ai/