• 2024 an AI Year in Review
    Dec 30 2024
    🎙️ Lunchtime BABLing: 2024 - An AI Year in Review 🎙️ Join Shea Brown (CEO, BABL AI), Jeffery Recker (COO, BABL AI), and Bryan Ilg (CSO, BABL AI) as they reflect on an extraordinary year in AI! In this final episode of the year, the trio dives into:
    🌟 The rapid growth of Responsible AI and algorithmic auditing in 2024.
    📈 How large language models are redefining audits and operational workflows.
    🌍 The global wave of AI regulations, including the EU AI Act, Colorado AI Act, and emerging laws worldwide.
    📚 The rise of AI literacy and the "race for competency" in businesses and society.
    🤖 Exciting (and risky!) trends like AI agents and their potential for transformation in 2025.
    Jeffery also shares an exciting update about his free online course, Introduction to Responsible AI, available until January 13th, 2025. Don’t miss this opportunity to earn a certification badge and join a live Q&A session!
    🎉 Looking Ahead to 2025: What’s next for AI governance, standards like ISO 42001, and the evolving role of education in shaping the future of AI? The team shares predictions, insights, and hopes for the year ahead.
    📌 Key Takeaways:
    AI is maturing rapidly, with businesses adopting governance frameworks and grappling with new regulations.
    Education and competency-building are essential to navigating the changing AI landscape.
    The global regulatory response is reshaping how AI is developed, deployed, and audited.
    Link to Raymond Sun's Techie Ray Global AI Regulation Tracker: https://www.techieray.com/GlobalAIRegulationTracker
    💡 Don’t miss this thought-provoking recap of 2024 and the exciting roadmap for 2025! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    40 mins
  • An Interview with Aleksandr Tiulkanov
    Dec 16 2024
    In this episode, BABL AI CEO Dr. Shea Brown interviews Aleksandr Tiulkanov, an expert in AI compliance and digital policy. Aleksandr shares his fascinating journey from being a commercial contracts lawyer to becoming a leader in AI policy at Deloitte and the Council of Europe. 🚀
    🔍 What’s in this episode?
    The transition from legal tech to AI compliance.
    Key differences between the Council of Europe’s Framework Convention on AI and the EU AI Act.
    How the EU AI Act fits into Europe’s product safety legislation.
    The challenges and confusion around conformity assessments and AI literacy requirements.
    Insights into Aleksandr’s courses designed for governance, risk, and compliance professionals.
    🛠️ Aleksandr also dives into practical advice for preparing for the EU AI Act, even in the absence of finalized standards, and the role of frameworks like ISO 42001.
    📚 Learn more about Aleksandr’s courses: https://aia.tiulkanov.info
    🤝 Follow Aleksandr on LinkedIn: https://www.linkedin.com/in/tyulkanov/
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    43 mins
  • The future of jobs with AI
    Dec 2 2024
    In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker and Bryan Ilg to tackle one of the most pressing questions of our time: How will AI impact the future of work? From fears of job displacement to the rise of entirely new roles, the trio explores:
    🔹 How AI will reshape industries and automate parts of our jobs.
    🔹 The importance of upskilling to stay competitive in an AI-driven world.
    🔹 Emerging career paths in responsible AI, compliance, and risk management.
    🔹 The delicate balance between technological disruption and human creativity.
    📌 Whether you're a seasoned professional, a student planning your career, or just curious about the future, this episode has something for you.
    👉 Don’t miss this insightful conversation about navigating the rapidly changing job market and preparing for a future where AI is a part of nearly every role.
    🎧 Listen on your favorite podcast platform or watch the full discussion here. Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI trends and insights! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    35 mins
  • How will a Trump Presidency Impact AI Regulation
    Nov 18 2024
    🎙️ Lunchtime BABLing Podcast: What Will a Trump Presidency Mean for AI Regulations? In this thought-provoking episode, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg to explore the potential impact of a Trump presidency on the landscape of AI regulation. 🚨🤖
    Key topics include:
    Federal deregulation and the push for state-level AI governance.
    The potential repeal of Biden's executive order on AI.
    Implications for organizations navigating a fragmented compliance framework.
    The role of global AI policies, such as the EU AI Act, in shaping U.S. corporate strategies.
    How deregulation might affect innovation, litigation, and risk management in AI development.
    This is NOT a political podcast—we focus solely on the implications for AI governance and the tech landscape in the U.S. and beyond. Whether you're an industry professional, policymaker, or tech enthusiast, this episode offers essential insights into the evolving world of AI regulation. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    37 mins
  • A BABL Deep Dive
    Nov 4 2024
    Welcome to a special Lunchtime BABLing episode, BABL Deep Dive, hosted by BABL AI CEO Dr. Shea Brown and Chief Sales Officer Bryan Ilg. This in-depth discussion explores the fundamentals and nuances of AI assurance—what it is, why it's crucial for modern enterprises, and how it works in practice.
    Dr. Brown breaks down the concept of AI assurance, highlighting its role in mitigating risks, ensuring regulatory compliance, and building trust with stakeholders. Bryan Ilg shares key insights from his conversations with clients, addressing common questions and challenges that arise when organizations seek to audit and assure their AI systems.
    This episode features a detailed presentation from a recent risk conference, offering a behind-the-scenes look at how BABL AI conducts independent AI audits and assurance engagements. If you're a current or prospective client, an executive curious about AI compliance, or someone exploring careers in AI governance, this episode is packed with valuable information on frameworks, criteria, and best practices for AI risk management.
    Watch now to learn how AI assurance can protect your organization from potential pitfalls and enhance your reputation as a responsible, forward-thinking entity in the age of AI! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    51 mins
  • AI Literacy Requirements of the EU AI Act
    Oct 21 2024
    👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
    📚 Courses Mentioned:
    1️⃣ AI Literacy Requirements Course: https://courses.babl.ai/p/ai-literacy-for-eu-ai-act-general-workforce
    2️⃣ EU AI Act - Conformity Requirements for High-Risk AI Systems Course: https://courses.babl.ai/p/eu-ai-act-conformity-requirements-for-high-risk-ai-systems
    3️⃣ EU AI Act - Quality Management System Certification: https://courses.babl.ai/p/eu-ai-act-quality-management-system-oversight-certification
    4️⃣ BABL AI Course Catalog: https://babl.ai/courses/
    🔗 Follow us for more: https://linktr.ee/babl.ai
    In this episode of Lunchtime BABLing, CEO Dr. Shea Brown dives into the "AI Literacy Requirements of the EU AI Act," focusing on the upcoming compliance obligations set to take effect on February 2, 2025. Dr. Brown explains the significance of Article 4 and discusses what "AI literacy" means for companies that provide or deploy AI systems, offering practical insights into how organizations can meet these new regulatory requirements.
    Throughout the episode, Dr. Brown covers:
    AI literacy obligations for providers and deployers under the EU AI Act.
    The importance of AI literacy in ensuring compliance.
    An overview of BABL AI’s upcoming courses, including the AI Literacy Training for the general workforce, launching November 4.
    Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    20 mins
  • AI Frenzy: Will It Really Replace Our Jobs?
    Oct 7 2024
    In this episode of Lunchtime BABLing, hosted by Dr. Shea Brown, CEO of BABL AI, we're joined by frequent guest Jeffery Recker, Co-Founder and Chief Operating Officer of BABL AI. Together, they dive into an interesting question in the AI world today: Will AI really replace our jobs?
    Drawing insights from a recent interview with MIT economist Daron Acemoglu, Shea and Jeffery discuss the projected economic impact of AI and what they believe the hype surrounding AI-driven job loss will actually look like. With only 5% of jobs expected to be heavily impacted by AI, is the AI revolution really what everyone thinks it is? They explore themes such as the overcorrection in AI investment, the role of responsible AI governance, and how strategic implementation of AI can create competitive advantages for companies.
    Tune in for an honest and insightful conversation on what AI will mean for the future of work, the economy, and beyond. If you enjoy this episode, don't forget to like and subscribe for more discussions on AI, ethics, and technology! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    17 mins
  • How NIST Might Help Deloitte With the FTC
    Sep 23 2024
    Welcome back to another insightful episode of Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into a fascinating discussion on how the NIST AI Risk Management Framework could play a crucial role in guiding companies like Deloitte through Federal Trade Commission (FTC) investigations. Shea and Jeffery discuss a recent complaint filed against Deloitte regarding its automated decision system for Medicaid eligibility in Texas, and how adherence to established frameworks could have mitigated the issues at hand.
    📍 Topics discussed:
    Deloitte’s Medicaid eligibility system in Texas
    The role of the FTC and the NIST AI Risk Management Framework
    How AI governance can safeguard against unintentional harm
    Why proactive risk management is key, even for non-AI systems
    What companies can learn from this case to improve compliance and oversight
    Tune in now and stay ahead of the curve! 🔊✨
    👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
    32 mins