Presented by Babl AI, this podcast discusses all issues related to algorithmic bias, algorithmic auditing, algorithmic governance, and the ethics of artificial intelligence and autonomous systems.
Mon, April 07, 2025
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown dives deep into one of the most pressing questions in AI governance today: how do we ensure the safety of Large Language Models (LLMs)? With new regulations like the EU AI Act, Colorado’s AI law, and emerging state-level requirements in places like California and New York, organizations developing or deploying LLM-powered systems face increasing pressure to evaluate risk, ensure compliance, and document everything. 🎯 What you'll learn:
- Why evaluations are essential for mitigating risk and supporting compliance
- How to adopt a socio-technical mindset and think in terms of parameter spaces
- What auditors (like BABL AI) look for when assessing LLM-powered systems
- A practical, first-principles approach to building and documenting LLM test suites
- How to connect risk assessments to specific LLM behaviors and evaluations
- The importance of contextualizing evaluations to your use case—not just relying on generic benchmarks
Shea also introduces BABL AI’s CIDA framework (Context, Input, Decision, Action) and shows how it forms the foundation for meaningful risk analysis and test coverage. Whether you're an AI developer, auditor, policymaker, or just trying to keep up with fast-moving AI regulations, this episode is packed with insights you can use right now. 📌 Don’t wait for a perfect standard to tell you what to do—learn how to build a solid, use-case-driven evaluation strategy today. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Mon, March 31, 2025
What does it really mean for AI to be explainable? Can we trust AI systems to tell us why they do what they do—and should the average person even care? In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by regular guests Jeffery Recker and Bryan Ilg to unpack the messy world of AI explainability—and why it matters more than you might think. From recommender systems to large language models, we explore: 🔍
- The difference between explainability and interpretability
- Why even humans struggle to explain their decisions
- What should be considered a “good enough” explanation
- The importance of stakeholder context in defining "useful" explanations
- Why AI literacy and trust go hand-in-hand
- How concepts from cybersecurity, like zero trust, could inform responsible AI oversight
Plus, hear about the latest report from the Center for Security and Emerging Technology calling for stronger explainability standards, and what it means for AI developers, regulators, and everyday users. Mentioned in this episode: 🔗 Link to BABL AI's Article: https://babl.ai/report-finds-gaps-in-ai-explainability-testing-calls-for-stronger-evaluation-standards/ 🔗 Link to "Putting Explainable AI to the Test" paper: https://cset.georgetown.edu/publication/putting-explainable-ai-to-the-test-a-critical-look-at-ai-evaluation-approaches/?utm_source=ai-week-in-review.beehiiv.com&utm_medium=referral&utm_campaign=ai-week-in-review-3-8-25 🔗 Link to BABL AI's "The Algorithm Audit" paper: https://babl.ai/algorithm-auditing-framework/
Mon, March 24, 2025
In this thought-provoking episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with Jeffery Recker and Bryan Ilg to unpack one of the most pressing topics of our time: AI’s impact on democracy. From algorithm-driven echo chambers and misinformation to the role of social media in shaping political discourse, the trio explores how AI is quietly—and sometimes loudly—reshaping our democratic systems.
- What happens when personalized content becomes political propaganda?
- Is YouTube the new social media without us realizing it?
- Can regulations keep up with AI’s accelerating influence?
- And are we already too far gone—or is there still time to rethink, regulate, and reclaim our democratic integrity?
This episode dives into:
- The unintended consequences of algorithmic curation
- The collapse of objective reality in the digital age
- AI-driven misinformation in elections
- The tension between regulation and free speech
- Global responses—from Finland’s education system to the EU AI Act
- What society can (and should) do to fight back
Whether you’re in tech, policy, or just trying to make sense of the chaos online, this is a conversation you won’t want to miss. 🔗 Jeffery’s free course, Intro to the EU AI Act, is available now! Get your Credly badge and learn how to start your compliance journey → https://babl.ai/introduction-to-the-eu-ai-act/
Mon, March 17, 2025
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by Jeffery Recker and Bryan Ilg to discuss the growing importance of AI literacy—what it means, why it matters, and how individuals and businesses can stay ahead in an AI-driven world. Topics covered:
- The evolution of AI education and BABL AI’s new subscription model for training & certifications.
- Why AI auditing skills are becoming essential for professionals across industries.
- How AI governance roles will shape the future of business leadership.
- The impact of AI on workforce transition and how individuals can future-proof their careers.
- The EU AI Act’s new AI literacy requirements—what they mean for organizations.
Want to level up your AI knowledge? Check out BABL AI’s courses & certifications! 🚀 Subscribe to our courses: https://courses.babl.ai/p/the-algorithmic-bias-lab-membership 👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
Mon, March 03, 2025
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown joins us live from RightsCon 2025 in Taipei to break down the latest conversations at the intersection of AI, human rights, and global policy. He’s joined by BABL AI COO Jeffery Recker and CSO Bryan Ilg, as they dive into the big takeaways from the conference and what it means for the future of AI governance. What’s in this episode?
✅ RightsCon Recap – How AI has taken over the human rights agenda
✅ AI Auditing & Accountability – Why organizations need to prove AI compliance
✅ Investors Are Paying Attention – Why AI risk management is becoming a priority
✅ The Role of Education – Why AI literacy is the key to ethical and responsible AI
✅ The International Association of Algorithmic Auditors – A new professional field is emerging
🚀 If you're passionate about AI, governance, and accountability, this episode is packed with insights you don’t want to miss.
Mon, February 24, 2025
Join BABL AI CEO Dr. Shea Brown on Lunchtime BABLing as he sits down with UX Consultant Ezra Schwartz for an in-depth conversation about the evolving world of user experience—and how it intersects with responsible AI. In this episode, you'll discover:
• Ezra’s Journey: From being a student in our AI & Algorithm Auditor Certification Program to becoming a seasoned UX consultant specializing in age tech.
• Beyond UI Design: Ezra breaks down the true essence of UX, explaining how it’s not just about pretty interfaces, but about creating intuitive, accessible, and human-centered experiences that build trust and drive user satisfaction.
• The Role of UX in AI: Learn how thoughtful UX design is essential in managing AI risks, facilitating cross-department collaboration, and ensuring that digital products truly serve their users.
• Age Tech Insights: Explore how innovative solutions, from fall detection systems to digital caregiving tools, are reshaping life for our aging population—and the importance of balancing technology with privacy and ethical considerations.
If you’re passionate about design, responsible AI, or just curious about the human side of technology, this episode is a must-listen. 👉 Connect with Ezra Schwartz: Website: https://www.artandtech.com LinkedIn: https://www.linkedin.com/in/ezraschwartz The Responsible AgeTech Conference Ezra is organizing: https://responsible-agetech.org
Mon, February 17, 2025
🇩🇪 People can join Quantpi's "RAI in Action" event series kicking off in Germany in March: 👉 https://www.quantpi.com/resources/events 🇺🇸 U.S. based folks can join Quantpi's GTC session on March 20th called "A scalable approach toward trustworthy AI": 👉 https://www.nvidia.com/gtc/session-catalog/?ncid=so-link-241456&linkId=100000328230011&tab.catalogallsessionstab=16566177511100015Kus&search=antoine#/session/1726160038299001jn0f 👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20". 📚 Sign up for our courses today: https://babl.ai/courses/ 🔗 Follow us for more: https://linktr.ee/babl.ai
🎙️ Lunchtime BABLing: An Interview with Mahesh Chandra Mukkamala from Quantpi 🎙️ In this episode of Lunchtime BABLing, host Dr. Shea Brown, CEO of BABL AI, sits down with Mahesh Chandra Mukkamala, a data scientist from Quantpi, to discuss the complexities of black box AI testing, AI risk assessment, and compliance in the age of evolving AI regulations. 💡 Topics Covered:
✔️ What is black box AI testing, and why is it crucial?
✔️ How Quantpi ensures model robustness and fairness across different AI systems
✔️ The role of AI risk assessment in EU AI Act compliance and enterprise AI governance
✔️ Challenges businesses face in AI model evaluation and best practices for testing
✔️ Career insights for aspiring AI governance professionals
With increasing regulatory pressure from laws like the EU AI Act, companies need to test their AI models rigorously. Whether you’re an AI professional, compliance officer, or just curious about AI governance, this conversation is packed with valuable insights on ensuring AI systems are trustworthy, fair, and reliable. 🔔 Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI governance insights from BABL AI!
📢 Listen to the podcast on all major podcast streaming platforms 📩 Connect with Mahesh on Linkedin: https://www.linkedin.com/in/maheshchandra/ 📌 Follow Quantpi for more AI insights: https://www.quantpi.com
Mon, February 10, 2025
Join host Dr. Shea Brown (CEO of BABL AI) along with guest speakers COO Jeffery Recker and CSO Bryan Ilg for an in-depth discussion on the rapidly evolving world of AI regulation. In this episode, our panel unpacks:
- The EU AI Act in Action: Learn about the new obligations now in force under the EU AI Act—including the crucial requirements of AI literacy (Article 4) and the prohibition of certain AI practices (Article 5).
- Compliance Timelines & What’s Next: Get the lowdown on the phased rollout, with upcoming standards and enforcement deadlines on the horizon, and discover practical steps companies should take to prepare.
- North American Regulatory Landscape: Explore the contrasting regulatory approaches in North America, from the shifting federal stance in the US to state-specific laws (like Colorado’s AI Act and New York City’s Local Law 144), and why this uncertainty matters for businesses.
- Risk, Ethics & the Future of AI in Business: Delve into the importance of risk management, AI literacy training, and human-centered design. Our guests share insights on why responsible AI isn’t just about compliance—it’s also a competitive advantage in today’s fast-paced market.
Whether you’re a business leader, technologist, or policy enthusiast, this episode offers valuable perspectives on how organizations can navigate the complex, global landscape of AI governance while protecting their customers and staying ahead of regulatory demands.
Mon, January 27, 2025
🎙️ Lunchtime BABLing: Interview with Abhi Sanka 🎙️ Join BABL AI CEO Dr. Shea Brown as he chats with Abhi Sanka, a dynamic leader in responsible AI and a graduate of BABL AI's inaugural Algorithm Auditor Certificate Program. In this episode, Abhi reflects on his unique journey—from studying the ethics of the Human Genome Project at Duke University to shaping science and technology policy for the U.S. government, to now helping drive innovation at Microsoft. Explore Abhi's insights on the parallels between the Human Genome Project and the current AI revolution, the challenges of governing agentic AI systems, and the importance of building trust through responsible design. They also discuss the evolving landscape of AI assurance and the critical need for collaboration between industry, policymakers, and civil society. 📌 Highlights:
- Abhi’s academic and professional path to responsible AI.
- The challenges of auditing agentic AI and aligning governance frameworks.
- The importance of community and collaboration in advancing responsible AI.
- Abhi’s goals for 2025 and his passion for staying connected to the wider AI ethics community.
Don’t miss this thought-provoking conversation packed with wisdom for anyone passionate about AI governance, policy, and innovation! 🔗 Abhi's Linkedin: https://www.linkedin.com/in/abhisanka/
Mon, January 13, 2025
🎙️ In this engaging episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown sits down with special guest Soribel Feliz, a former US diplomat turned AI governance expert. Soribel shares her fascinating career journey from the State Department to big tech roles at Meta and Microsoft, and now as an AI governance and compliance specialist at DHS. 🚀 From her early experiences moderating content algorithms at Meta to advising on AI policy in the US Senate, Soribel discusses the evolution of AI, its ethical challenges, and the crucial importance of data privacy and workforce impacts. She also opens up about transitioning into the tech world, overcoming technical learning curves, and her dedication to helping others navigate career uncertainties in the AI-driven future. 🌍✨ 🔑 Key Highlights:
- Soribel's career leap from diplomacy to tech and AI policy.
- The ethical dilemmas and societal impacts of AI she’s witnessed firsthand.
- Her thoughts on AI literacy gaps and the need for growth mindset education.
- Practical advice for those transitioning into AI or confronting job uncertainties.
🌟 This episode is packed with wisdom, optimism, and actionable insights for young professionals, career changers, and anyone passionate about responsible AI. 📌 Follow Soribel Feliz for more on AI governance, career guidance, and navigating uncertainty in a rapidly evolving world. Links to her website and newsletter are in the description below. Linkedin: https://www.linkedin.com/in/soribel-f-b5242b14/
Mon, December 30, 2024
🎙️ Lunchtime BABLing: 2024 - An AI Year in Review 🎙️ Join Shea Brown (CEO, BABL AI), Jeffery Recker (COO, BABL AI), and Bryan Ilg (CSO, BABL AI) as they reflect on an extraordinary year in AI! In this final episode of the year, the trio dives into:
🌟 The rapid growth of Responsible AI and algorithmic auditing in 2024.
📈 How large language models are redefining audits and operational workflows.
🌍 The global wave of AI regulations, including the EU AI Act, Colorado AI Act, and emerging laws worldwide.
📚 The rise of AI literacy and the "race for competency" in businesses and society.
🤖 Exciting (and risky!) trends like AI agents and their potential for transformation in 2025.
Jeffery also shares an exciting update about his free online course, Introduction to Responsible AI, available until January 13th, 2025. Don’t miss this opportunity to earn a certification badge and join a live Q&A session! 🎉 Looking Ahead to 2025: What’s next for AI governance, standards like ISO 42001, and the evolving role of education in shaping the future of AI? The team shares predictions, insights, and hopes for the year ahead. 📌 Key Takeaways:
- AI is maturing rapidly, with businesses adopting governance frameworks and grappling with new regulations.
- Education and competency-building are essential to navigating the changing AI landscape.
- The global regulatory response is reshaping how AI is developed, deployed, and audited.
Link to Raymond Sun's Techie Ray Global AI Regulation Tracker: https://www.techieray.com/GlobalAIRegulationTracker 💡 Don’t miss this thought-provoking recap of 2024 and the exciting roadmap for 2025!
Mon, December 16, 2024
In this episode, BABL AI CEO Dr. Shea Brown interviews Aleksandr Tiulkanov, an expert in AI compliance and digital policy. Aleksandr shares his fascinating journey from being a commercial contracts lawyer to becoming a leader in AI policy at Deloitte and the Council of Europe. 🚀 🔍 What’s in this episode?
- The transition from legal tech to AI compliance.
- Key differences between the Council of Europe’s Framework Convention on AI and the EU AI Act.
- How the EU AI Act fits into Europe’s product safety legislation.
- The challenges and confusion around conformity assessments and AI literacy requirements.
- Insights into Aleksandr’s courses designed for governance, risk, and compliance professionals.
🛠️ Aleksandr also dives into practical advice for preparing for the EU AI Act, even in the absence of finalized standards, and the role of frameworks like ISO 42001. 📚 Learn more about Aleksandr’s courses: https://aia.tiulkanov.info 🤝 Follow Aleksandr on LinkedIn: https://www.linkedin.com/in/tyulkanov/
Mon, December 02, 2024
In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker and Bryan Ilg to tackle one of the most pressing questions of our time: How will AI impact the future of work? From fears of job displacement to the rise of entirely new roles, the trio explores:
🔹 How AI will reshape industries and automate parts of our jobs.
🔹 The importance of upskilling to stay competitive in an AI-driven world.
🔹 Emerging career paths in responsible AI, compliance, and risk management.
🔹 The delicate balance between technological disruption and human creativity.
📌 Whether you're a seasoned professional, a student planning your career, or just curious about the future, this episode has something for you. 👉 Don’t miss this insightful conversation about navigating the rapidly changing job market and preparing for a future where AI is a part of nearly every role. 🎧 Listen on your favorite podcast platform or watch the full discussion here. Don’t forget to like, subscribe, and hit the notification bell to stay updated on the latest AI trends and insights!
Mon, November 18, 2024
🎙️ Lunchtime BABLing Podcast: What Will a Trump Presidency Mean for AI Regulations? In this thought-provoking episode, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg to explore the potential impact of a Trump presidency on the landscape of AI regulation. 🚨🤖 Key topics include:
- Federal deregulation and the push for state-level AI governance.
- The potential repeal of Biden's executive order on AI.
- Implications for organizations navigating a fragmented compliance framework.
- The role of global AI policies, such as the EU AI Act, in shaping U.S. corporate strategies.
- How deregulation might affect innovation, litigation, and risk management in AI development.
This is NOT a political podcast—we focus solely on the implications for AI governance and the tech landscape in the U.S. and beyond. Whether you're an industry professional, policymaker, or tech enthusiast, this episode offers essential insights into the evolving world of AI regulation.
Mon, November 04, 2024
Welcome to a special Lunchtime BABLing episode, BABL Deep Dive, hosted by BABL AI CEO Dr. Shea Brown and Chief Sales Officer Bryan Ilg. This in-depth discussion explores the fundamentals and nuances of AI assurance—what it is, why it's crucial for modern enterprises, and how it works in practice. Dr. Brown breaks down the concept of AI assurance, highlighting its role in mitigating risks, ensuring regulatory compliance, and building trust with stakeholders. Bryan Ilg shares key insights from his conversations with clients, addressing common questions and challenges that arise when organizations seek to audit and assure their AI systems. This episode features a detailed presentation from a recent risk conference, offering a behind-the-scenes look at how BABL AI conducts independent AI audits and assurance engagements. If you're a current or prospective client, an executive curious about AI compliance, or someone exploring careers in AI governance, this episode is packed with valuable information on frameworks, criteria, and best practices for AI risk management. Watch now to learn how AI assurance can protect your organization from potential pitfalls and enhance your reputation as a responsible, forward-thinking entity in the age of AI!
Mon, October 21, 2024
👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20". 📚 Courses Mentioned:
1️⃣ AI Literacy Requirements Course: https://courses.babl.ai/p/ai-literacy-for-eu-ai-act-general-workforce
2️⃣ EU AI Act - Conformity Requirements for High-Risk AI Systems Course: https://courses.babl.ai/p/eu-ai-act-conformity-requirements-for-high-risk-ai-systems
3️⃣ EU AI Act - Quality Management System Certification: https://courses.babl.ai/p/eu-ai-act-quality-management-system-oversight-certification
4️⃣ BABL AI Course Catalog: https://babl.ai/courses/
🔗 Follow us for more: https://linktr.ee/babl.ai
In this episode of Lunchtime BABLing, CEO Dr. Shea Brown dives into the "AI Literacy Requirements of the EU AI Act," focusing on the upcoming compliance obligations set to take effect on February 2, 2025. Dr. Brown explains the significance of Article 4 and discusses what "AI literacy" means for companies that provide or deploy AI systems, offering practical insights into how organizations can meet these new regulatory requirements. Throughout the episode, Dr. Brown covers:
- AI literacy obligations for providers and deployers under the EU AI Act.
- The importance of AI literacy in ensuring compliance.
- An overview of BABL AI’s upcoming courses, including the AI Literacy Training for the general workforce, launching November 4.
Mon, October 07, 2024
In this episode of Lunchtime BABLing, hosted by Dr. Shea Brown, CEO of BABL AI, we're joined by frequent guest Jeffery Recker, Co-Founder and Chief Operating Officer of BABL AI. Together, they dive into an interesting question in the AI world today: Will AI really replace our jobs? Drawing insights from a recent interview with MIT economist Daron Acemoglu, Shea and Jeffery discuss the projected economic impact of AI and what they believe the hype surrounding AI-driven job loss will actually look like. With only 5% of jobs expected to be heavily impacted by AI, is the AI revolution really what everyone thinks it is? They explore themes such as the overcorrection in AI investment, the role of responsible AI governance, and how strategic implementation of AI can create competitive advantages for companies. Tune in for an honest and insightful conversation on what AI will mean for the future of work, the economy, and beyond. If you enjoy this episode, don't forget to like and subscribe for more discussions on AI, ethics, and technology!
Mon, September 23, 2024
Welcome back to another insightful episode of Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into a fascinating discussion on how the NIST AI Risk Management Framework could play a crucial role in guiding companies like Deloitte through Federal Trade Commission (FTC) investigations. Shea and Jeffery reflect on a recent complaint filed against Deloitte regarding its automated decision system for Medicaid eligibility in Texas, and how adherence to established frameworks could have mitigated the issues at hand. 📍 Topics discussed:
- Deloitte’s Medicaid eligibility system in Texas
- The role of the FTC and the NIST AI Risk Management Framework
- How AI governance can safeguard against unintentional harm
- Why proactive risk management is key, even for non-AI systems
- What companies can learn from this case to improve compliance and oversight
Tune in now and stay ahead of the curve! 🔊✨ 👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.
Mon, September 02, 2024
Mon, August 12, 2024
In the second part of our in-depth discussion on the EU AI Act, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker continue to explore the essential steps organizations need to take to comply with this groundbreaking regulation. If you missed Part One, be sure to check it out, as this episode builds on the foundational insights shared there. In this episode, titled "Where to Get Started with the EU AI Act: Part Two," Dr. Brown and Mr. Recker dive deeper into the practical aspects of compliance, including:
- Documentation & Transparency: Understanding the extensive documentation and transparency measures required to demonstrate compliance and maintain up-to-date records.
- Challenges for Different Organizations: A look at how compliance challenges differ for small and medium-sized enterprises compared to larger organizations, and what proactive steps can be taken.
- Global Compliance Considerations: Discussing the merits of pursuing global compliance strategies and the implications of the EU AI Act on businesses operating outside the EU.
- Enforcement & Penalties: Insight into how the EU AI Act will be enforced, the bodies responsible for oversight, and the significant penalties for non-compliance.
- Balancing Innovation with Regulation: How the EU AI Act aims to foster innovation while ensuring that AI systems are human-centric and trustworthy.
Whether you're a startup navigating the complexities of AI governance or a large enterprise seeking to align with global standards, this episode offers valuable guidance on how to approach the EU AI Act and ensure your AI systems are compliant, trustworthy, and ready for the future. 🔗 Key Topics Discussed:
- What documentation and transparency measures are required to demonstrate compliance?
- How can businesses effectively maintain and update these records?
- How will the EU AI Act be enforced, and which bodies are responsible for its oversight and implementation?
- What are the biggest challenges you foresee in complying with the EU AI Act?
- What resources or support mechanisms are being provided to businesses to help them comply with the new regulations?
- How does the EU AI Act balance the need for regulation with the need to foster innovation and competitiveness in the AI sector?
- What are the penalties for non-compliance, and how will they be determined and applied?
- What guidelines should entities follow to ensure their AI systems are human-centric and trustworthy?
- What proactive measures can entities take to ensure their AI systems remain compliant as technology and regulations evolve?
- How do you see the EU AI Act evolving in the future, and what additional measures or amendments might be necessary?
👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.
Mon, August 12, 2024
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to kick off a deep dive into the EU AI Act. Titled "Where to Get Started with the EU AI Act: Part One," this episode is designed for organizations navigating the complexities of the new regulations. With the EU AI Act officially in place, the discussion centers on what businesses and AI developers need to do to prepare. Dr. Brown and Mr. Recker cover crucial topics including the primary objectives of the Act, the specific aspects of AI systems that will be audited, and the high-risk AI systems requiring special attention under the new regulations. The episode also tackles practical questions, such as how often audits should be conducted to ensure ongoing compliance and how much of the process can realistically be automated. Whether you're just starting out with compliance or looking to refine your approach, this episode offers valuable insights into aligning your AI practices with the requirements of the EU AI Act. Don't miss this informative session to ensure your organization is ready for the changes ahead! 🔗 Key Topics Discussed:
- What are the primary objectives of the EU AI Act, and how does it aim to regulate AI technologies within the EU? What impact will this have outside the EU?
- What specific aspects of AI systems will need conformity assessments for compliance with the EU AI Act?
- Are there any particular high-risk AI systems that require special attention under the new regulations?
- How do you assess and manage the risks associated with AI systems?
- What are the key provisions and requirements of the Act that businesses and AI developers need to be aware of?
- How are we ensuring that our AI systems comply with GDPR and other relevant data protection regulations?
- How often should these conformity assessments be conducted to ensure ongoing compliance with the EU AI Act?
📌 Stay tuned for Part Two where we continue this discussion with more in-depth analysis and practical tips! 👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes. #AI #EUAIACT #ArtificialIntelligence #Compliance #TechRegulation #AIAudit #LunchtimeBABLing #BABLAI
Mon, July 08, 2024
Welcome back to Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and Bryan Ilg delve into the crucial topic of "Building Trust in AI." Episode Highlights:
- Trust Survey Insights: Bryan shares findings from a recent PwC trust survey, highlighting the importance of trust between businesses and their stakeholders, including consumers, employees, and investors.
- AI's Role in Trust: Discussion on how AI adoption impacts trust and the bottom line for organizations.
- Internal vs. External Trust: Insights into the significance of building both internal (employee) and external (consumer) trust.
- Responsible AI: Exploring the need for responsible AI strategies, data privacy, bias and fairness, and the importance of transparency and accountability.
- Practical Steps: Tips for businesses on how to bridge the trust gap and effectively communicate their AI governance and responsible practices.
Join us as we explore how businesses can build a trustworthy AI ecosystem, ensuring ethical practices and fostering a strong relationship with all stakeholders. If you enjoyed this episode, please like, subscribe, and share your thoughts in the comments below!
Mon, July 01, 2024
Join us for an insightful episode of "Lunchtime BABLing" as BABL AI CEO Shea Brown and VP of Sales Bryan Ilg dive deep into New York City's Local Law 144, a year after its implementation. This law mandates the auditing of AI tools used in hiring for bias, ensuring fair and equitable practices in the workplace. Episode Highlights: Understanding Local Law 144: A breakdown of what the law entails, its goals, and its impact on employers and AI tool providers. Year One Insights: What has been learned from the first year of compliance, including common challenges and successes. Preparing for Year Two: Key considerations for organizations as they navigate the second year of compliance. Learn about the nuances of data sharing, audit requirements, and maintaining compliance. Data Types and Testing: Detailed explanation of historical data vs. test data, and their roles in bias audits. Practical Advice: Decision trees and strategic advice for employers on how to handle their data and audit needs effectively. This episode is packed with valuable information for employers, HR professionals, and AI tool providers to ensure compliance with New York City's AI bias audit requirements. Stay informed and ahead of the curve with expert insights from Shea and Bryan. 🔗 Don't forget to like, subscribe, and share! If you're watching on YouTube, hit the like button and subscribe to stay updated with our latest episodes. If you're tuning in via podcast, thank you for listening! See you next week on Lunchtime BABLing.
Mon, June 03, 2024
In this insightful episode of Lunchtime BABLing, BABL AI CEO Shea Brown and COO Jeffery Recker dive deep into Colorado's pioneering AI Consumer Protection Law. This legislation marks a significant move at the state level to regulate artificial intelligence, aiming to protect consumers from algorithmic discrimination. Shea and Jeffery discuss the implications for developers and deployers of AI systems, emphasizing the need for robust risk assessments, documentation, and compliance strategies. They explore how this law parallels the EU AI Act, focusing particularly on discrimination and the responsibilities laid out for both AI developers and deployers. Listeners, don't miss the chance to enhance your understanding of AI governance with a special offer from BABL AI: Enjoy 20% off all courses using the coupon code "BABLING20." Explore our courses here: https://courses.babl.ai/ For a deeper dive into Colorado's AI law, check out our detailed blog post: "Colorado's Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law". Don't forget to subscribe to our newsletter at the bottom of the page for the latest updates and insights. Link to the blog here: https://babl.ai/colorados-comprehensive-ai-regulation-a-closer-look-at-the-new-ai-consumer-protection-law/ Timestamps: 00:21 - Welcome and Introductions 00:43 - Overview of Colorado's AI Consumer Protection Law 01:52 - State vs. Federal Initiatives in AI Regulation 04:00 - Detailed Discussion on the Law's Provisions 07:02 - Risk Management and Compliance Techniques 09:51 - Importance of Proper Documentation 12:21 - Developer and Deployer Obligations 17:12 - Strategies for Public Disclosure and Risk Notification 20:48 - Annual Impact Assessments 22:44 - Transparency in AI Decision-Making 24:05 - Consumer Rights in AI Decisions 26:03 - Public Disclosure Requirements 28:36 - Final Thoughts and Takeaways Remember to like, subscribe, and comment with your thoughts or questions. 
Your interaction helps us bring more valuable content to you!
Mon, May 06, 2024
🎙️ Welcome back to Lunchtime BABLing, where we bring you the latest insights into the rapidly evolving world of AI ethics and governance! In this episode, BABL AI CEO Shea Brown and VP of Sales Bryan Ilg delve into the intricacies of the newly released NIST AI Risk Management Framework, with a specific focus on its implications for generative AI technologies. 🔍 The conversation kicks off with Shea and Bryan providing an overview of the NIST framework, highlighting its significance as a voluntary guideline for governing AI systems. They discuss how the framework's "govern, map, measure, manage" functions serve as a roadmap for organizations to navigate the complex landscape of AI risk management. 📑 Titled "NIST AI Risk Management Framework: Generative AI Profile," this episode delves deep into the companion document that focuses specifically on generative AI. Shea and Bryan explore the unique challenges posed by generative AI in terms of information integrity, human-AI interactions, and automation bias. 🧠 Shea provides valuable insights into the distinctions between AI, machine learning, and generative AI, shedding light on the nuanced risks associated with generative AI's ability to create content autonomously. The discussion delves into the implications of misinformation and disinformation campaigns fueled by generative AI technologies. 🔒 As the conversation unfolds, Shea and Bryan discuss the voluntary nature of the NIST framework and explore strategies for driving industry-wide adoption. They examine the role of certifications and standards in building trust and credibility in AI systems, emphasizing the importance of transparent and accountable AI governance practices. 🌐 Join Shea and Bryan as they navigate the complex terrain of AI risk management, offering valuable insights into the evolving landscape of AI ethics and governance. 
Whether you're a seasoned AI practitioner or simply curious about the ethical implications of AI technologies, this episode is packed with actionable takeaways and thought-provoking discussions. 🎧 Tune in now to stay informed and engaged with the latest advancements in AI ethics and governance, and join the conversation on responsible AI development and deployment!
Mon, April 08, 2024
In this episode of the Lunchtime BABLing Podcast, Dr. Shea Brown, CEO of BABL AI, dives into the intricacies of the EU AI Act alongside Jeffery Recker, the COO of BABL AI. Titled "The EU AI Act: Prohibited and High-Risk Systems and why you should care," this conversation sheds light on the recent passing of the EU AI Act by the parliament and its implications for businesses and individuals alike. Dr. Brown and Jeffery explore the journey of the EU AI Act, from its proposal to its finalization, outlining the key milestones and upcoming steps. They delve into the categorization of AI systems into prohibited and high-risk categories, discussing the significance of compliance and the potential impacts on businesses operating within the EU. The conversation extends to the importance of understanding biases in AI algorithms, the complexities surrounding compliance, and the value of getting ahead of the curve in implementing necessary measures. Dr. Brown offers insights into how BABL AI assists organizations in navigating the regulatory landscape, emphasizing the importance of building trust and quality products in the AI ecosystem. Key Topics Covered: Overview of the EU AI Act and its journey to enactment Differentiating prohibited and high-risk AI systems Understanding biases in AI algorithms and their implications Compliance challenges and the importance of early action How BABL AI supports organizations in achieving compliance and building trust Why You Should Tune In: Whether you're a business operating within the EU or an individual interested in the impact of AI regulation, this episode provides valuable insights into the evolving regulatory landscape and its implications. Dr. Shea Brown and Jeffery Recker offer expert perspectives on navigating compliance challenges and the importance of ethical AI governance. Don't Miss Out: Subscribe to the Lunchtime BABLing Podcast for more thought-provoking discussions on AI, ethics, and governance. 
Stay tuned for upcoming episodes and join the conversation on critical topics shaping the future of technology.
Mon, March 18, 2024
Join us in this latest episode of the Lunchtime BABLing Podcast, where Shea Brown, CEO of BABL AI, shares invaluable insights from a live webinar Q&A session on carving out a niche in AI Ethics Consulting. Dive deep into the world of AI ethics, algorithm auditing, and the journey of building a boutique firm focused on ethical risk, bias, and effective governance in AI technologies. In This Episode: Introduction to AI Ethics Consulting: Shea Brown introduces the session, providing a backdrop for his journey and the birth of BABL AI. Journey of BABL AI: Discover the challenges and milestones in creating and growing an AI ethics consulting firm. Insights from the Field: Shea shares his experiences and learnings from auditing algorithms for ethical risks and navigating the evolving landscape of AI ethics. Live Q&A Highlights: Audience questions range from enrolling in AI ethics courses, the role of lawyers in AI audits, to the importance of philosophy in AI ethics consulting. Advice on Career Pivoting: Shea offers advice on pivoting into AI ethics consulting, highlighting the importance of understanding regulatory requirements and finding one’s niche. Auditing Process Explained: Get a high-level overview of the auditing process, including the distinction between assessments and formal audits. Building a Career in AI Ethics: Discussion on the demand for AI ethics consulting, networking strategies, and the interdisciplinary nature of audit teams. Key Takeaways: The essential blend of skills needed in AI ethics consulting. Insights into the challenges and opportunities in the field of AI ethics. Practical advice for individuals looking to enter or pivot into AI ethics consulting. Don’t miss this opportunity to learn from one of the pioneers in AI ethics consulting. Whether you’re new to the field or looking to deepen your knowledge, this episode is packed with insights, experiences, and advice to guide you on your journey. 
Listeners can use coupon code "FREEFEB" to get our "Finding Your Place in AI Ethics Consulting" course for free. Link on our Website. Lunchtime BABLing listeners can use coupon code "BABLING" to save 20% on all our course offerings.
Mon, February 19, 2024
Welcome to another enlightening episode of Lunchtime BABLing, proudly presented by BABL AI, where we dive deep into the evolving world of artificial intelligence and its governance. In this episode, Shea is thrilled to bring you a series of exciting updates and educational insights that are shaping the future of AI. What's Inside: 1. BABL AI Joins the NIST Consortium: We kick off with the groundbreaking announcement that BABL AI has officially become a part of the prestigious NIST consortium. Discover what this means for the future of AI development and governance, and how this collaboration is set to elevate the standards of AI technologies and applications. 2. Introducing ISO 42001: Next, Shea delves into the newly announced ISO 42001, a comprehensive governance framework that promises to redefine AI governance. Join Shea as he explores the high-level components of this auditable framework, shedding light on its significance and the impact it's poised to have on the AI industry. 3. Aligning Education with Innovation: We also explore how BABL AI's online courses are perfectly aligned with the NIST AI framework, ISO 42001, and other pivotal regulations and frameworks. Learn how our educational offerings are designed to empower you with the competencies needed to navigate and excel in the complex landscape of AI governance. Whether you're a professional looking to enhance your skills or a student eager to enter the AI field, our courses offer invaluable insights and knowledge that align with the latest standards and frameworks.
Mon, February 05, 2024
Sign up for free for our online course "Finding Your Place in AI Ethics Consulting" during the month of February 2024. 🌍 In this new episode of Lunchtime BABLing, Shea dives deep into the complex world of AI regulatory compliance on a global scale. As the digital frontier expands, understanding and adhering to AI regulations becomes crucial for businesses and technologists alike. This episode offers a high-level guide on what to consider for AI regulatory compliance globally. 🔍 Highlights of This Episode: EU AI Act: Your Compliance Compass - Discover how the European Union's AI Act serves as a holistic framework that can guide you through 95% of global AI compliance challenges. Common Ground in Global AI Laws - Shea explores the shared foundations across various AI regulations, highlighting the common themes across global regulatory requirements. Proactive Mindset Shift - The importance of shifting corporate mindsets towards proactive risk management in AI cannot be overstated. We discuss why companies must start establishing Key Performance Indicators (KPIs) now to identify and mitigate risks before facing legal consequences. NIST's Role in Measuring AI Risk - Get insights into how the National Institute of Standards and Technology (NIST) is developing methodologies to quantify risk in AI systems, and what this means for the future of AI. 🚀 Takeaway: This episode is a must-listen for anyone involved in AI development, deployment, or governance. Whether you're a startup or a multinational corporation, aligning with global AI regulations is imperative. Lunchtime BABLing will provide you with the knowledge and strategies to navigate this complex landscape effectively, ensuring your AI solutions are not only innovative but also compliant and ethical. 👉 Subscribe to our channel for more insights into AI technology and its global impact. Don't forget to hit the like button if you find this episode valuable and share it with your network to spread the knowledge. 
#AICompliance #EUAIAct #AIRegulation #RiskManagement #TechnologyPodcast #AIethics #GlobalAI #ArtificialIntelligence
Mon, January 29, 2024
Sign up for free during the month of February for our online course "Finding Your Place in AI Ethics Consulting." Link here: https://courses.babl.ai/p/finding-your-place-ai-ethics-consulting Lunchtime BABLing listeners can save 20% off all our online courses by using coupon code "BABLING." Link here: https://babl.ai/courses/ 🤖 Welcome to another engaging episode of Lunchtime BABLing! In this episode, we delve into the intricate world of AI ethics with a special focus on its socio-technical aspects. 🎙️ Join our host, Shea Brown, as he welcomes a distinguished guest, Borhane Blili-Hamelin, PhD. Together, they explore some thought-provoking parallels between implementing AI ethics in industry and research environments. This discussion promises to shed light on the challenges and nuances of applying ethical principles in the fast-evolving field of artificial intelligence. 🔍 The conversation is not just theoretical but is grounded in ongoing research. Borhane Blili-Hamelin and Leif Hancox-Li's joint work, which was a highlight at the NeurIPS 2022 workshop, forms the basis of this insightful discussion. The workshop, held on November 28 and December 5, 2022, provided a platform for presenting their findings and perspectives. Link to paper here: https://arxiv.org/abs/2209.00692 💡 Whether you're a professional in the field, a student, or just someone intrigued by the ethical dimensions of AI, this episode is a must-watch! So, grab your lunch, sit back, and let's BABL about the socio-technical side of AI ethics. 👍 Don't forget to like, share, and subscribe for more insightful episodes of Lunchtime BABLing. Your support helps us continue to bring fascinating topics and expert insights to your screen. 📢 We love hearing from you! Share your thoughts on this episode in the comments below. What are your views on AI ethics in industry versus research? Let's keep the conversation going! 
🔔 Stay tuned for more episodes by hitting the bell icon to get notified about our latest uploads. #LunchtimeBABLing #AIethics #SocioTechnical #ArtificialIntelligence #EthicsInAI #NeurIPS2022 #AIResearch #IndustryVsResearch #TechEthics
Mon, January 22, 2024
📺 About This Episode: Join us on a riveting journey into the heart of AI integration in the business world in our latest episode of Lunchtime BABLing, where we talk about "What Things Should Companies Consider When Implementing AI." Host Shea Brown, CEO of BABL AI, teams up with Bryan Ilg, our VP of Sales, to unravel the complexities and opportunities presented by AI in the modern business landscape. In this episode, we dive deep into the nuances of AI implementation, shedding light on often-overlooked aspects such as reputational and regulatory risks, and the paramount importance of trust and effective governance. Shea and Bryan offer their expert insights into the criticality of establishing robust AI governance frameworks and enhancing existing strategies to stay ahead in this rapidly evolving domain. Whether you're a business owner, an executive, or simply intrigued by the ethical and practical dimensions of AI in business, this episode is packed with valuable insights and actionable advice. 🔗 Stay Connected: Hit that like and subscribe button for more enlightening episodes. Tune into our podcast across various platforms for your on-the-go AI insights. 👋 Thank you for joining us on Lunchtime BABLing as we explore the intricate dance of AI, business, and ethics. Can't wait to share more in our upcoming episodes!
Mon, January 15, 2024
Description: 🔊 Welcome to another episode of Lunchtime BABLing, where we dive deep into the world of AI and its impact on our lives. In this episode, "Key Takeaways of the EU AI Act," join our hosts, Shea Brown, CEO of BABL AI, and Jeffery Recker, for a comprehensive analysis of the recently agreed-upon EU AI Act. 🌍 The EU AI Act is making waves as a global law that regulates the use of artificial intelligence. It's comparable to how GDPR reshaped privacy laws, and now the EU AI Act is set to do the same for AI. This episode breaks down the Act's implications, its potential effects on companies and individuals, and what the future of AI governance might look like under this new regulation. 🔑 Highlights of the episode include: A detailed explanation of what the EU AI Act entails and why it's a game-changer. Insights into who will be affected by the Act and how it extends beyond European borders. The classification of AI systems under the Act based on risk levels, including prohibited and high-risk categories. A look into the conformity assessment process and the compliance requirements for organizations. Practical steps organizations should take to prepare for compliance. 🤔 Whether you're a tech enthusiast, an AI professional, or just curious about how AI laws impact our world, this episode offers valuable insights. Join us as we unravel the complexities of the EU AI Act and its far-reaching consequences. 📣 Do you have specific questions about the EU AI Act or AI governance? Leave your comments below or reach out to us! Don't forget to like and subscribe if you're watching on YouTube, or thank you for listening if you're tuning in via podcast. Stay informed and ahead in the world of AI with Lunchtime BABLing! #EUAIAct #ArtificialIntelligence #AILaw #TechGovernance #BABLAI #Podcast
Mon, January 08, 2024
Lunchtime BABLing listeners can use coupon code "BABLING" to save 20% off all BABL AI courses. Courses: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification Description: Welcome back to another episode of Lunchtime BABLing! In this episode, Shea Brown, CEO of BABL AI, joins forces with Jeffery Recker, our COO, to delve into an intriguing topic on the newly formed International Association of Algorithmic Auditors (IAAA). Throughout the episode, Shea and Jeffery unpack the crucial role of the IAAA in shaping the landscape of AI and algorithm auditing. They discuss the association's goals, its distinction from existing organizations, and its significance in ensuring that algorithms are audited for compliance, ethical standards, and the prevention of potential harm to individuals and society. The discussion also highlights the challenges and complexities involved in algorithmic auditing, the importance of professional conduct in the field, and the emerging regulations like the EU AI Act. Moreover, they explore the different types of algorithmic audits and the vital role of transparency in the auditing process. As one of the key founding members of the IAAA, Shea provides insights into the formation of this organization, its mission, and the importance of fostering a professional community among AI and algorithm auditors. Whether you're a professional in the field, someone interested in the ethical aspects of AI, or simply curious about the future of technology governance, this episode offers valuable perspectives and critical discussions on the evolving world of algorithmic auditing. IAAA website: https://iaaa-algorithmicauditors.org 🎙️ Listen to the full episode to understand the significance of algorithmic audits, the role of IAAA in shaping the industry, and the future of AI governance. Don't forget to like and subscribe for more insightful discussions on Lunchtime BABLing! 
#AI #AlgorithmicAuditing #IAAA #TechnologyEthics #LunchtimeBABLing
Mon, December 18, 2023
Understanding the EU AI Act: Fundamental Rights Impact Assessments Explained Description: Join us in this eye-opening episode of the Lunchtime BABLing Podcast where Shea Brown, our host and CEO of BABL AI, teams up with Jeffery Recker, our COO, to delve deep into the recent developments in AI regulation, particularly focusing on the EU AI Act. This episode, "Understanding Fundamental Rights Impact Assessments in the EU AI Act," is a must-listen for anyone interested in the intersection of AI, regulation, and human rights. Key Discussion Points: Introduction to the EU AI Act: Gain insights into the EU AI Act's passing and its significance in shaping the future of AI regulation. Role of Fundamental Rights Impact Assessments: Understand what these assessments are, their importance, and how they differ from traditional impact assessments. Impact on Businesses and AI Deployers: Learn about the new obligations for companies, especially those deploying high-risk AI systems. Practical Steps for Compliance: Shea Brown breaks down complex regulatory requirements into actionable steps for businesses of all sizes. Future of AI and Trust: Discover how compliance with these regulations can build trust and pave the way for responsible AI innovation. Episode Highlights: Expert Insights: Jeffery Recker shares his firsthand experience with the increasing interest in AI regulations and the challenges faced by businesses. Detailed Breakdown: Shea Brown offers a comprehensive analysis of the Fundamental Rights Impact Assessments, their implications, and the overall impact of the EU AI Act on the AI landscape. Interactive Discussions: Engaging conversation between Shea and Jeffery, providing a nuanced understanding of the subject.
Mon, November 06, 2023
🔹 New Episode: National Conference on AI Law, Ethics, and Compliance In this latest installment of Lunchtime BABLing, Shea unpacks the developments from a major conference in Washington D.C., focusing on AI law, ethics, and compliance. He shares valuable insights from their workshop and interactions with legal experts in the field of AI governance. Key Discussions: -Understanding AI and the risks involved. -Governance frameworks for AI deployment. -The implications of the recent U.S. Executive Order on AI. -Global initiatives for AI safety and governance. Industry Spotlight: -The surge of generative AI in corporate strategy. -The evolving landscape of AI policy, privacy concerns, and intellectual property. Engage with Us: Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses: Coupon Code: "BABLING" Link to the full AI and Algorithm Auditing Certificate Program is here: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification
Tue, October 24, 2023
Lunchtime BABLing is back with a new season! In this episode, Shea briefly talks about what to expect in the upcoming weeks for Lunchtime BABLing, as well as diving into some detail about our AI and Algorithm Auditing Certification Program. Lunchtime BABLing viewers/listeners can use the coupon code below to receive 20% off all our online courses: Coupon Code: "BABLING" Link to the full AI and Algorithm Auditing Certificate Program is here: https://courses.babl.ai/p/ai-and-algorithm-auditor-certification For more information about BABL AI and our services, as well as the latest news in AI Auditing and AI Governance, check out our website: https://babl.ai/
Mon, May 08, 2023
On this week's Lunchtime BABLing, Shea talks with BABL AI auditor and technical expert, Khoa Lam. They discuss a wide range of topics including: 1: How Khoa got into the field of Responsible AI 2: His work at AI Incident Database 3: His thoughts on generative AI and large language models 4: The technical aspects of AI and Algorithmic Auditing Khoa Lam Linkedin: https://www.linkedin.com/in/khoalklam/ AI Incident Database: https://incidentdatabase.ai BABL AI Courses: https://courses.babl.ai/ Website: https://babl.ai/ Linkedin: https://www.linkedin.com/company/babl-ai/
Wed, April 26, 2023
We welcomed back AI auditor and consultant Jiahao Chen to discuss all things responsible AI! Check out Jiahao at: https://responsibleai.tech/
Fri, April 21, 2023
In this episode Shea reviews the new rules for NYC's Local Law No. 144, which requires bias audits of automated employment decision tools. The date for enforcement has been pushed back to July 5th, 2023 to give time for companies to seek independent auditors (which is still a requirement). Sign up for our new "AI & Algorithm Auditor Certification Program" starting May 8th! https://courses.babl.ai/p/ai-and-algorithm-auditor-certification?affcode=616760_7ts3gujl
Sun, April 02, 2023
This week on Lunchtime BABLing, we discuss: 1: The power, hype, and dangers of large language models like ChatGPT. 2: The recent open letter asking for a moratorium on AI research. 3: In-context learning in large language models and the problems it poses for auditing. 4: NIST's AI Risk Management Framework and its influence on public policy like California's ASSEMBLY BILL NO. 331. 5: Updates on The Algorithmic Bias Lab's new training program for AI auditors. https://babl.ai https://courses.babl.ai/?affcode=616760_7ts3gujl
Thu, March 30, 2023
This week we discuss our recent report "The Current State of AI Governance", which is the culmination of a year-long research project looking into the effectiveness of AI governance controls. Full report here: https://babl.ai/wp-content/uploads/2023/03/AI-Governance-Report.pdf We also discuss our new training program, the "AI & Algorithm Auditor Certificate Program", which starts in May 2023. This program has courses and certifications in 5 key areas necessary for AI auditing and Responsible AI in general: 1: Algorithms, AI, & Machine Learning 2: Algorithmic Risk & Impact Assessments 3: AI Governance & Risk Management 4: Bias, Accuracy, & the Statistics of AI Testing 5: Algorithm Auditing & Assurance Early pricing can be found here: https://courses.babl.ai/?affcode=616760_7ts3gujl BABL AI: https://babl.ai
Mon, March 20, 2023
On this week's Lunchtime BABLing (#19) we talk with Jiahao Chen; data scientist, researcher, and founder of Responsible Artificial Intelligence LLC. We discuss the evolving debate around large language models (LLMs) and their derivatives (ChatGPT, Bard, Bing AI Chatbot, etc.), including: 1: Do systems like ChatGPT reason? 2: How do businesses know whether LLMs are useful (and safe) for them to use in a product or business process? 3: What kinds of guardrails are needed for the ethical use of LLMs (including prompt engineering). 4: Black-box vs. white-box testing of LLMs for algorithm auditing. 5: Classical assessments of intelligence and their applicability to LLMs. 6: Re-thinking education and assessment in the age of AI. Jiahao Chen Twitter: https://twitter.com/acidflask Responsible AI LLC: https://responsibleai.tech/
Wed, March 15, 2023
You need way more than "five skills" to be an AI auditor, but there are five areas of study in which auditors need basic competency if they want to do the kinds of audits that BABL AI performs. This is part of our weekly webinar/podcast that ran long, so we've cut out much of the Q&A, which covered questions we'll address in future videos, like: What kind of training do I need to become an AI or algorithm auditor? Do I need technical knowledge of machine learning to do AI ethics?
Sun, March 05, 2023
On this week's Lunchtime BABLing, Shea goes over the difference between a direct engagement audit and an attestation engagement audit, and gives examples from our criteria-based attestation audit for NYC Local Law No. 144.
Sun, February 26, 2023
In this Q&A session, Shea talks about strategies for applying the skills you already have to the emerging field of AI ethics, governance, and policy consulting. This is a follow-up to our first webinar on the topic. Questions include: 1. Do I need an advanced degree to work in responsible AI? 2. How do I know what topics to focus on? 3. Do I need programming skills to work in responsible AI? 4. Where can I find training in AI ethics?
Sun, February 19, 2023
In this episode of Lunchtime BABLing, we discuss the emergence of new standards for AI Risk Management, as well as regulatory requirements involving AI risk management, including: 1: NIST AI Risk Management Framework 2: ISO/IEC 23894:2023 - Information technology — Artificial intelligence — Guidance on risk management 3: Colorado's SB21-169 - Protecting Consumers from Unfair Discrimination in Insurance Practices 4: Risk Management in the [EU] Artificial Intelligence Act 5: BABL AI Cheat Sheet on AI Governance
Sun, February 12, 2023
Today on Lunchtime BABLing, Shea talks with the Chief Data Science Officer at Harver, Dr. Frida Polli. Prior to being at Harver, Frida was the founder and CEO of pymetrics; we talk about: ✅ How AI is being used in hiring, ✅ What it takes to use it responsibly, ✅ Her own journey through this space, and ✅ Reflections on upcoming regulations, including the recent New York City Local Law 144 and the EU AI Act. Frida: https://www.linkedin.com/in/frida-polli-phd-03a1855/ Harver: https://harver.com/
Sun, February 05, 2023
Today on Lunchtime BABLing, Shea talks with the Executive Director of the non-profit ForHumanity, Ryan Carrier, FHCA. Ryan discusses: ✅ What role ForHumanity plays in the AI & Algorithm Auditing ecosystem ✅ Recent laws and activities that are relevant to AI auditing, including the DSA, NYC Bias Audit Law (Local Law 144), the EU AI Act, EEOC guidelines, and more ✅ Ways for people to get involved in the ecosystem. ForHumanity: https://forhumanity.center/ Website: https://babl.ai/ Linkedin: https://www.linkedin.com/company/babl-ai/ Courses: https://courses.babl.ai/ Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Sun, January 29, 2023
Today on Lunchtime BABLing, Shea reflects on recent meetings, events, and announcements, including: ✅ Public hearing for NYC Local Law No. 144 ✅ European Commission Workshop on auditing for the DSA ✅ New AI laws and guidelines (e.g. NIST AI RMF, NJ, and NY laws) Shea follows it up with his thoughts on the training needed for AI auditing, and why 2023 is when AI and Algorithm Auditing goes mainstream. Website: https://babl.ai/ https://www.linkedin.com/company/babl-ai/ Courses: https://courses.babl.ai/ Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Sun, January 22, 2023
On this episode of Lunchtime BABLing, Shea talks about AI Audit & Assurance, and where it fits into the emerging regulatory landscape. ✅ What laws, regulations, and guidelines are driving the need for AI audit and assurance? ✅ What does the ecosystem look like, and where does he think it's going (he might mention your company here)? ✅ What is different about algorithm auditing as compared to other types of audit and assurance? Courses: https://courses.babl.ai/ Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Sun, January 15, 2023
How can you apply the skills you already have to the emerging field of AI ethics, governance, and policy consulting? In this edition of Lunchtime BABLing, Shea Brown talks about his experience and thoughts on finding your unique niche in the industry. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Sun, January 08, 2023
On this week's Lunchtime BABLing, we're talking with Merve Hickok, a leading voice in AI policy and regulation. We discuss the future of AI regulation, especially the EU AI Act. Topics include: 1: How can regulations best protect fundamental rights? 2: What will regulations require of companies and governments? 3: Why are responsible AI practices crucial for businesses? 4: What can companies do now to ensure they're on the right path? Merve's LinkedIn: https://www.linkedin.com/in/mervehickok/ Free resources for Responsible AI: https://www.aiethicist.org/ Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Sun, January 01, 2023
Last week, the latest updates to New York City's Bias Audit Law for Automated Employment Decision Tools (AEDT), also known as NYC Local Law 144, were released. Join this week's episode of Lunchtime BABLing with BABL AI CEO, Dr. Shea Brown, as he breaks down these latest changes to the NYC Bias Audit Law. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Sun, December 11, 2022
In today's episode of Lunchtime BABLing, Shea Brown invites Borhane Blili-Hamelin, PhD, to discuss some surprising parallels between the challenge of putting AI ethics into practice in industry versus research settings! Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Fri, November 18, 2022
In this week's episode, BABL AI's CEO Shea Brown discusses what a process audit is, and how it can be used to verify disparate impact testing conducted by employers and vendors. New York City's Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion. The law comes into effect on Jan 1, 2023. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Sun, November 13, 2022
This week CEO Dr. Shea Brown reflects on his time at the Algorithmic Auditing International Conference by Eticas. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Fri, November 04, 2022
A number of forthcoming laws and regulations that will govern the use and development of AI will require mandatory risk or impact assessments. This week BABL AI's CEO Shea Brown discusses what an ethical risk assessment is, and how your organization can implement them today. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Thu, October 27, 2022
New York City's Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion. Although the law comes into effect on Jan 1, 2023, several aspects of the bias audit have yet to be clarified. In our second weekly mini-webinar series, BABL AI's CEO Shea Brown discusses what an algorithmic bias audit is, including: 1. Bias audit basics 2. Differences between employer and vendor audits 3. Open Q&A session Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Thu, October 20, 2022
New York City's Local Law 144 requires independent bias audits for automated employment decision tools (AEDT) used to substantially assist or replace decisions in hiring or promotion. Although the law comes into effect on Jan 1, 2023, several aspects of the bias audit have yet to be clarified. This week we're discussing the amendments to the upcoming NYC hiring law and what they might entail for vendors, employers, and more. Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
Sat, September 10, 2022
The Algorithmic Bias Podcast, presented by Babl AI, covers everything related to algorithmic bias, auditing, governance, ethics and more. You can find a new episode uploaded every week on all major platforms, as well as video recordings on the Babl AI YouTube channel. Please follow us on social media to stay up to date with new episodes. Babl AI is a leading boutique consultancy that focuses on responsible AI governance, algorithm risk and impact assessments, algorithmic bias assessments and audits, and corporate training on responsible AI. We combine leading research expertise and extensive practitioner experience in AI and organizational ethics to drive impactful change at the frontier of technology for our clients. babl.ai https://www.youtube.com/channel/UCabVe81x_XHoGGDXTIWQSJQ https://www.linkedin.com/company/babl-ai/ Check out the babl.ai website for more stuff on AI Governance and Responsible AI!