• LLMs are not magic: Finding ways to make AI generate trustworthy content | Tim Leers

  • Jan 26 2024
  • Length: 1 hr and 2 mins
  • Podcast

  • Summary

  • Can we rely on LLMs to repurpose our content for social media?

    To end our first season of the AI and Digital Transformation Podcast, we talked to dataroots R&D engineer Tim Leers about two very popular topics in 2023: LLMs, and content creation.

    In this age of content creation and social media, journalists have an extra role to fill: sharing their work and the news through their own social media accounts. With the popularity of ChatGPT and Midjourney, many now ask LLMs to repurpose their news content for social media.

    This comes at a price. By relying solely on AI, journalists, like other content creators, risk sharing repurposed content that is biased, polarizing, and misleading.

    Listen to this episode to learn how you can make LLMs more trustworthy when repurposing your existing content.


    Who is Tim Leers?

    Tim started his AI journey in neuroscience and psychology, studying the parallels between human & machine minds.

    Four years ago, he shifted his focus from brains to bytes, joining dataroots, a leading company in AI and data-driven solutions, as an AI engineer. In this role, he assists organizations in the research, development, and deployment of cutting-edge AI systems.

    Tim now focuses primarily on how to use generative AI, agents, and LLMs effectively and responsibly, and on advising decision-makers, engineers, and end-users on how to navigate the expanding role of AI in work, life, and society.

    Check out our show notes to learn more about Tim, his work, and dataroots.


    Time Stamps

    (00:00:00) Trailer

    (00:00:53) About Tim

    (00:03:51) AI Use Case - Smart News Assistance

    (00:05:48) Challenges in repurposing content using LLMs

    (00:07:52) LLM text-to-audio

    (00:09:13) LLM workflow: Interactive process vs. automation

    (00:11:26) LLMs are not magic: summarizing & humans in the loop

    (00:14:49) Journalist’s perception of AI: authenticity, trust and quality

    (00:18:09) Is this the end of outsourcing a press agency for content?

    (00:20:25) Search engine and algorithms: detecting unique news content

    (00:26:14) Risk of Conspiracies and Prompt Governance

    (00:29:47) What makes dataroots’ smart news assistance tool different compared to ChatGPT?

    (00:31:48) Do I need to finetune LLMs?

    (00:34:18) Can open-source models replace ChatGPT?

    (00:37:38) Adapting LLMs in businesses: Usability, APIs, Hardware vs. Cloud

    (00:46:16) Future of work, critical thinking, and LLMs as digital glue

    (00:55:01) Recap, closing remarks and book recommendation



    --- More on G.M.S.C. Consulting

    Follow us on our socials:

    • LinkedIn
    • YouTube

    Book an appointment with us.

    Sign up to our newsletter.

    ---

    Music credits: storyblocks.com

    Logo credits: Joshua Coleman, Unsplash

