Episodes

  • The Role of Experts in AI Regulation with Dr. Richard Saldanha, Founding Member of IST's AI Special Interest Group, UK | EP 03
    Nov 7 2024

In this episode of the Responsible AI Report, Patrick and Dr. Richard Saldanha discuss the EU's AI Code of Practice and its collaborative approach to AI governance. They explore the importance of adaptability in regulations, the balance between innovation and safety, and the need for qualified personnel in regulatory bodies. Richard emphasizes the significance of a principles-based approach and the role of collaboration among stakeholders in shaping effective AI regulations.

    Takeaways

    • The EU AI Act aims to create a global model for AI regulations.
    • Collaboration between academia, industry, and civil society is crucial for effective AI governance.
    • A principles-based approach allows for flexibility in AI regulation.
    • Regulators should hire individuals with a strong understanding of technology.
    • Balancing regulation and innovation requires pragmatism from all parties involved.
    • A supportive regulatory environment can enhance technological development.
    • Finding consensus among diverse stakeholders can be challenging.
    • The UK aims to align with the EU AI Act while maintaining flexibility.
    • Professional accreditation in AI skills is essential for industry growth.


    Learn more by visiting:
    1. Referenced article: https://www.ainews.com/p/eu-gathers-experts-to-draft-ai-code-of-practice-for-general-ai-models

    2. EU AI Act 2024/1689: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689

    3. UK Automated Vehicles Act 2024: https://www.legislation.gov.uk/ukpga/2024/10/contents/enacted

    4. Richard's Queen Mary University of London profile: https://www.qmul.ac.uk/sef/staff/richardsaldanha.html

    5. Richard's Academic Speakers Bureau profile: https://www.academicspeakersbureau.com/speakers/richard-saldanha

    6. The UK Institute of Science and Technology (IST) website: https://istonline.org.uk/

    7. IST AI professional accreditation:
    https://istonline.org.uk/professional-registration/registered-artificial-intelligence-practitioners/

    8. IST AI training: https://istonline.org.uk/ist-artificial-intelligence-training/

Dr. Richard Saldanha is one of the founding members of the Institute of Science and Technology's Artificial Intelligence Special Interest Group in the UK. He is actively involved in the development of the Institute's AI professional accreditation and hosts its online AI Seminar Series. Richard is a Visiting Lecturer at Queen Mary University of London, where he teaches Machine Learning in Finance on the Master's Degree Programme in the School of Economics and Finance. He is also an Industrial Collaborator in the AI for Control Problems Project at The Alan Turing Institute. Richard's earlier career was in quantitative finance (risk, trading and investments), where he gained over two decades of experience working for institutions in the City of London. He is still actively engaged in quantitative finance via Oxquant, a consulting firm he co-heads with Dr Drago Indjic. Richard attended Oriel College, University of Oxford, and holds a doctorate (DPhil) in graph theory and multivariate analysis. He is a Fellow and Chartered Statistician of the Royal Statistical Society; a Science Council Chartered Scientist; a Fellow and Advanced Practitioner in Artificial Intelligence of the Institute of Science and Technology; a Member of the Institution of Engineering and Technology; and has recently joined the Responsibl

    Support the show

    Visit our website at responsible.ai


    21 mins
  • The Intersection of AI and Healthcare with Dr. Jolley-Paige and Caraline Bruzinski, mpathic | EP 02
    Oct 24 2024

In this episode of the Responsible AI Report, Patrick speaks with Caraline Bruzinski and Dr. Amber Jolley-Paige from mpathic about the intersection of AI and healthcare. They discuss the importance of measuring AI accuracy, the need for standardized testing, acceptable error rates in medical AI, and current trends in AI adoption within the healthcare sector. The conversation emphasizes the critical role of human oversight and expert involvement in ensuring the safety and efficacy of AI tools in medical applications.

    Takeaways

    • AI in healthcare requires domain-specific validation.
    • Human oversight is essential for AI accuracy.
    • Standardized testing for medical AI is crucial.
    • Acceptable error rates depend on potential harm.
    • Different healthcare sectors adopt AI at varying rates.
    • Generative AI is just one aspect of healthcare AI.
    • AI tools must be tailored to specific medical needs.
    • Experts should guide AI development and deployment.
    • The healthcare industry is still figuring out best practices.
    • AI advancements necessitate ongoing regulatory discussions.

    Learn more by visiting:
    https://mpathic.ai/
    https://www.linkedin.com/in/amber-jolley-paige-ph-d-72041b46/
    https://www.linkedin.com/in/caraline-7b22588b/

Dr. Jolley-Paige is a licensed professional counselor, researcher, and educator with over a decade of experience in the mental health field. As the Vice President of Clinical Product and a founding team member at mpathic, she leads a team that utilizes an evidence-based labeling system to advance natural language processing technologies. Dr. Jolley-Paige leverages her extensive clinical, research, and teaching background to develop a conversation and insights engine, providing individuals and organizations with actionable insights for enhanced understanding.

Caraline Bruzinski is a Senior Machine Learning Engineer at mpathic, where she models clinical trial data from therapist-client sessions with a focus on measuring empathy and therapist-patient conversational outcomes. Caraline specializes in refining models to achieve higher accuracy and reliability, developing custom ML models tailored to specific clinical setting challenges, and conducting statistical analysis to enhance the accuracy and fairness of machine learning outcomes. She holds a Master's degree in Computer Science from New York University, with a focus on AI/ML, and has a background in data engineering, bringing extensive experience from previous roles including Tech Lead at Glossier Inc., where she developed a recommendation system that boosted sales by over $2M annually.

    The Responsible AI Report is produced by the Responsible AI Institute.



    18 mins
  • Towards a Global & Cooperative AI Future with Renata Dwan, Special Advisor to the UN Secretary General's Envoy on Technology | EP 01
    Oct 10 2024

    In episode 01 of the Responsible AI Report, Renata Dwan discusses the critical need for global governance of artificial intelligence (AI) and the challenges that arise from differing national perspectives. She emphasizes the importance of collaboration, equity, and transparency in developing effective AI governance frameworks. Dwan outlines strategies for achieving consensus among nations and highlights the role of the UN in facilitating dialogue and cooperation. The discussion also touches on the implications of AI for society and the necessity of addressing market failures and ensuring equitable distribution of AI benefits.

    Takeaways

    • Global AI governance is essential due to the borderless nature of technology.
    • There is a significant asymmetry in AI development and access.
    • A global dialogue is necessary for effective AI governance.
    • Equity in AI benefits distribution is crucial for stability.
    • Trust can be built through collaborative efforts in AI governance.
    • The UN's role is to maintain international order amidst AI advancements.
    • Market failures in AI need to be addressed proactively.
    • Representation in AI governance discussions is lacking and needs improvement.
    • The lessons from climate change governance can inform AI governance.
    • Capacity building is vital for equitable participation in AI development.

    Learn more by visiting:
    https://www.un.org/en/
    https://www.un.org/techenvoy/
    https://www.un.org/techenvoy/global-digital-compact

Renata Dwan is Special Adviser to the UN Secretary-General's Envoy on Technology, where she led support for the elaboration of the Global Digital Compact approved by heads of state at the UN Summit of the Future. Renata has driven multilateral cooperation initiatives for over 25 years within and outside the UN. As Director of the United Nations Institute for Disarmament Research (UNIDIR), she led initiatives on digital technology governance and arms control. She drove major UN-wide initiatives on UN reform and partnerships, and dialogue on the responsible use of technologies in UN peace operations. Previously, Renata was Deputy Director of Chatham House, the Royal Institute of International Affairs, where she oversaw the Institute's research agenda and digital initiatives. She was Programme Director at the Stockholm International Peace Research Institute (SIPRI) and a visiting fellow at the EU Institute for Security Studies. She received her B.A., M.Phil., and D.Phil. in International Relations from Oxford University, UK. Renata has published widely on international policy and security issues. She is an Irish national.

    The Responsible AI Report is produced by the Responsible AI Institute.



    25 mins
  • Trailer: Welcome to the Responsible AI Report!
    Sep 25 2024

The Responsible AI Report is a brand new podcast, brought to you by the Responsible AI Institute. Each week we will bring you the latest news and trends in the responsible AI ecosystem with leading industry experts. Whether it's unpacking promising progress, pressing dilemmas, or regulatory updates, our trailblazing guests will spotlight emerging innovations through a practical lens, helping listeners implement and advance AI responsibly. We are excited to have you join us!




    1 min