Original Message:
Sent: 1/11/2024 11:11:00 AM
From: Eric Cohen
Subject: RE: Responsible? AI
Thank you for sharing the article. That was a pretty masterful way of presenting their results!
I see Netflix has the 2020 documentary "Coded Bias" on this topic.
As you note, this issue is not confined to GenAI. I was first made aware of it fifteen years ago, through a humorous but pointed take on the subject from 2009: the situation comedy "Better Off Ted," a satirical look at soulless corporate America. Its fictional company, Veridian Dynamics, was a hotbed of corporate insensitivity and technological bias. One episode, entitled "Racial Sensitivity," revolves around the installation of a new state-of-the-art sensor system that fails to detect people of color, leading the company executives to propose absurd and racially insensitive workarounds instead of simply replacing the system.
------------------------------
Eric Cohen
Proprietor
Cohen Computer Consulting
------------------------------
Original Message:
Sent: Jan 10, 2024 10:18:24 AM
From: Sai Honig
Subject: Responsible? AI
I've been speaking about this for years, ever since I was harmed by technology myself, and not just by AI.
AI is especially harmful to people of color and to disabled and neurodivergent people.
Here's an article I've posted: https://www.bloomberg.com/graphics/2023-generative-ai-bias/
I'm currently working in the Responsible AI space. I would join the CSA AI working group, but the meeting times don't work for those outside the US.
Happy to discuss this further.
------------------------------
Sai Honig, CCSP, CISSP
Wellington, New Zealand
------------------------------
Original Message:
Sent: Jan 09, 2024 07:47:22 PM
From: Saurav Bhattacharya
Subject: Responsible? AI
Lately, I've been closely observing the trend of Responsible AI spearheaded by various companies. It seems the government is somewhat out of touch with the rapid advancements in this domain. Surprisingly, even IT professionals seem only partially aware of the strides in AI technology and the associated risks. While I'm not suggesting an apocalyptic scenario, the potential for significant harm to humanity, especially to the current generation, which stands on the brink of Artificial General Intelligence (AGI), cannot be overlooked. I believe AGI is closer to reality than many assume, though I'll refrain from delving into details.
This brings us to a crucial question: what are the current initiatives to ensure AI is used responsibly? How are we educating the masses about AI's benefits and risks? Humanity has endured its imperfections for ages, but with each significant technological leap, we also forge greater risks, often through misuse. While human innovation and potential are undeniably remarkable and worthy of celebration, recklessly releasing technologies like ChatGPT into the world without proper regulatory frameworks seems like a risky venture, one that could stem from negligence or, worse, intentional harm.
------------------------------
Saurav Bhattacharya
Software Engineer
Microsoft
------------------------------