From Myths To Models To Madness: Annoying AI Talking about Interesting Things

By: Virtual Story Lab
  • Summary

  • Out of the lab and into the fire. Welcome to Virtual Story Lab’s latest adventure in storytelling and AI. What happens when you write, curate, and then give AI your research? A podcast, of course. Welcome to From Myths To Models To Madness: Annoying AI Talking about Interesting Things, where we do the thinking and AI does the speaking.

    We do a lot of research on a wide range of topics related to technology and storytelling, and we want to share it with you. Our goal: empower creative people to take on and tame AI.

    We hope you enjoy our annoying pair, and if you want to know more, check us out at empower.virtualstorylab.com.

    Compos Mentis Productions 2024
Episodes
  • The Misinformation Effect: How AI Uses People as Fake News Super Spreaders
    Nov 3 2024

    In a world where AI shapes much of our media landscape, are we becoming unwitting allies in the spread of misinformation? In The Misinformation Effect, our (slightly annoying) AI hosts dive heart first into how AI doesn’t just generate fake news—it relies on real people to amplify it. Listen in as we explore the surprising ways humans become "super spreaders" of AI-driven disinformation and what can be done to break the cycle.

    Brought to you by Virtual Story Lab - empower.virtualstorylab.com

    Sources:

    Aïmeur, E., Amri, S. and Brassard, G., 2023. Fake news, disinformation and misinformation in social media: a review. Social Network Analysis and Mining, 13(1), p.30. Available at: https://link.springer.com/article/10.1007/s13278-023-01028-5

    Bashardoust, A., Feuerriegel, S. and Shrestha, Y.R., 2024. Comparing the willingness to share for human-generated vs. AI-generated fake news. arXiv preprint arXiv:2402.07395. Available at: https://arxiv.org/abs/2402.07395

    14 mins
  • Remix or Rip-off? Copyright in the Age of AI
    Oct 27 2024

    James Cameron, the mind behind Terminator, once warned about rogue AI—but now, he's backing it to revolutionize film. In this episode, our intrepid (and slightly annoying) AI hosts ponder the ins and outs of how AI could reshape creative industries, cutting costs and boosting creativity, yet facing a minefield of copyright issues. Is scraping online art for AI training fair use or just theft? And who really owns what AI creates? Buckle up as we navigate this tangled web of innovation, ethics, and ownership.

    Brought to you by Virtual Story Lab - empower.virtualstorylab.com

    Sources:

    Desai, D.R. and Riedl, M., 2024. Between Copyright and Computer Science: The Law and Ethics of Generative AI. Northwestern University.

    Lees, D., 2024. AI could transform film visual effects. But first, the technology needs to address copyright debate. The Conversation. Available at: https://theconversation.com/ai-could-transform-film-visual-effects-but-first-the-technology-needs-to-address-copyright-debate-240348

    13 mins
  • Pirates or Pioneers? AI’s Great Data Plunder
    Oct 20 2024

    As we, the people, drown in data, experts warn that within the next couple of years AI could face a data drought. Models like GPT-4 have been trained on trillions of words, but the next generation will demand even more—quadrillions, quintillions—numbers so large the zeros disappear into the distance. But where will all this data come from? Who owns it? Who controls it? In this episode, our intrepid (and slightly annoying) AI hosts delve into the implications of AI’s insatiable appetite for data and what it means for future development. They discuss, amongst other things, the impact on our data identities, our privacy, and copyright laws that increasingly look like they are no longer fit for purpose.

    Brought to you by Virtual Story Lab - empower.virtualstorylab.com

    Sources:

    Longpré, L., 2024. Consent in Crisis: The Rapid Decline of the AI Data Commons. arXiv preprint arXiv:2407.14933. Available at: https://arxiv.org/abs/2407.14933

    Woodie, A., 2024. Are We Running Out of Training Data for GenAI? Big Data Wire. Available at: https://www.bigdatawire.com/2024/07/26/are-we-running-out-of-training-data-for-genai

    9 mins