Artificial Intelligence

  • 1.  Long Reads: Focusing on AI Governance

    Posted Jul 22, 2020 09:57:00 AM
Lately there has been a fair amount of publication in academic, industry, and investigative-journalism circles about the impact of AI and ML on our daily lives, particularly in the mundane places where algorithms shape or replace human decisions.  Here are three long reads that shape the discussion on the topic:

    Rabbit Hole (New York Times investigative journalism podcast, 9 installments, 28 min. per episode).   A fascinating single case study of one typical person as they relive and reflect on their 12,000-entry YouTube history over a multi-year period.  The journalists and the subject walk through how the YouTube recommender algorithm contributed to their radicalization, and subsequent de-radicalization.  The case study includes an interview with the leader of YouTube digital media content, as well as AI engineers who left major software companies over the ethical dilemmas recommender systems cause.

    We Have Already Let the Genie Out of the Bottle (Tim O'Reilly, CEO of O'Reilly Media, 18 min. read).  In this blog post for the Rockefeller Foundation, Tim O'Reilly expands on his view that we have already let AI and ML escape the grasp of governance by the simple fact that the tech industry continues to release single-minded AI operating as binary (cat, not cat) sorting machines, which he sees as a far greater threat than the technology itself.

    "The Social Impact of Machine Learning and Artificial Intelligence in Society" (YouTube streamed live on May 20, 2020) Dr. Suresh Venkatasubramanian,  University of Utah, Department of Computing, delivers a 55 minute lecture on Algorithmic Fairness, which is a deep dive into the ethical challenges of AI and ML usage for automated decision making.  He presents Algorithmic Fairness as a new and needed discipline and lays the foundation for what he sees as needed work in the area. 



    ------------------------------
    Mark Yanalitis
    ------------------------------


  • 2.  RE: Long Reads: Focusing on AI Governance

    Posted Jul 23, 2020 07:57:00 AM
    It's interesting to see someone argue that we have no governance over AI. In my opinion, that is relative to its use case. During its creation, maybe. But from an enterprise perspective, there is still a lot of room to build governance around how it is used. We are still in the infancy of its potential for the future.

    ------------------------------
    Sean Heide
    Research Analyst
    CSA
    ------------------------------



  • 3.  RE: Long Reads: Focusing on AI Governance

    Posted Jul 23, 2020 11:12:00 AM
    True, but then there are smashing developments like GPT-3 from OpenAI.  On the whole this has been a bone-aching societal problem.
    Foremost, our legal and regulatory frameworks are woefully inadequate and continue to weaken under the pressing desire for more money (through cost avoidance).  Equality and fairness are not "itches that can be scratched," and therefore are not part of any AI/ML optimization.

    When widespread commercial adoption of GPT-3 capability occurs, who is behind that optimization? Companies that want to drive down software developer costs, media production costs, and language translation costs, in the name of optimizing productivity and ultimately avoiding cost. Of all the optimizations, this is the most insidious, because it takes ground typically held by people who creatively produce - a unique aspect of humanity. Do you think that GPT-13 will be better at its core capability? Heck yea. At 175 billion parameters, and a conservative assumption of 25% compound growth per year, that is a whopping 1.45 trillion additional parameters in the model in 10 years (and that assumes growth in capability is not itself exponential).  Here is the enabler: elastic cloud computing can scale the model - meaning there might be no upper limit to what GPT-(x) can do.
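    The back-of-the-envelope projection above can be checked with a few lines of code. This is an illustrative sketch only: the 175-billion figure is GPT-3's published parameter count, but the 25% annual growth rate and 10-year horizon are the assumptions made in this thread, not a forecast.

    ```python
    def projected_parameters(current_billions: float, annual_growth: float, years: int) -> float:
        """Project model size (in billions of parameters) under compound annual growth.

        Illustrative only: the growth rate is this thread's assumed figure,
        not a real forecast of model scaling.
        """
        return current_billions * (1 + annual_growth) ** years

    # GPT-3's published size, with the 25% compound growth rate assumed above.
    total = projected_parameters(175, 0.25, 10)
    additional = total - 175
    print(f"total: {total:.0f}B, additional: {additional:.0f}B")
    # prints "total: 1630B, additional: 1455B" - i.e. roughly 1.45 trillion extra parameters
    ```

    Note that under compound growth the additional parameters dwarf the starting size; a linear 25%-of-current-size per year would instead give only about 438 billion additional, which shows how much the compounding assumption drives the conclusion.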

    The modern challenge is that as well-heeled investors seek AI optimizations that increase revenue in the short term, single-minded ("no brakes") AI adoption has long-term impacts.  Short-term-style thinking is all about optimization, but the problems that style of thinking creates are a stew of unintended consequences.   On the whole we (the big We - society) are not being cautious.  We are being exuberant when it is not warranted.  For more background on the equitable distribution of AI benefit, read the Oxford Study.

    For now, I fall into the "not enough GRC" camp when it comes to AI applications.  My mind will change over time, and I am open to changing positions as I take in more information.

    Mark Y.

    ------------------------------
    Mark Yanalitis
    ------------------------------