Can an LLM Teach You How to Make Chocolate? | #PodSaveChocolate Ep 132

Episode 132 of #PodSaveChocolate was inspired by a post in the r/chocolate sub on Reddit. Let’s take a look at what the OP posted (referring to ChatGPT “advice”), how the community responded, and then ask other LLMs, and ChatGPT itself, to chime in and explore what LLMs can teach us about making chocolate.
When and Where to Watch
Links below to watch LIVE and to view the archived episode.
Click on this (shareable) link to watch on YouTube. Please subscribe (free!) to the @PodSaveChocolate YouTube channel, like this video, comment, and share this episode to help grow the #PSC community.
Watch and comment LIVE or view the archived episode on LinkedIn. Join my network on LinkedIn to receive notifications and to refer business to each other.
Watch and comment LIVE or view the archived episode on TheChocolateLife page on Facebook (for 30 days, then watch the archive on YouTube).
Follow TheChocolateLife on Facebook to receive notifications and catch up on other content.
Episode 132 Overview
My experience using LLMs (e.g., ChatGPT, Google Gemini) to answer questions about making chocolate is that they’re a lot like Goldilocks and the Three Bears, where most of the answers are either “too cold” (sparse) or “too hot” (overwhelming) ⋯ only very rarely are they just right.
This episode of PodSaveChocolate was inspired by a post in the r/chocolate subreddit.
“Common mistakes in making chocolate” is the title of the post. Let’s investigate.
The TL;DR — No. LLMs cannot teach you how to make chocolate. At best they can give you an overview of the process.
And be careful: which LLM you use, the prompt you use, the “mode” the LLM is in, and even the interface to the LLM all influence the “answers” you will get.
In this episode of #PodSaveChocolate we will continue to explore the limits ⋯ and quirks ⋯ of LLMs when it comes to exploring topics in chocolate.
Goldilocks and the Three LLMs
In this episode I query three LLMs (ChatGPT, Gemini, and Mistral) using four different vectors, or points of entry.
This is the prompt I used for each LLM:
I want to make 1 kilo of milk chocolate from already fermented and dried cocoa beans. What are the steps? What equipment do I need? What is a good recipe?
In particular, I am going to point out how two different vectors into ChatGPT, one via a third party and the other direct, return very different results. I am also going to show how Gemini generates two very different responses based on how you “engage” it.
The results represent very different philosophies about how the designers think about the nature of human-LLM interaction and how the “conversations” are mediated and moderated.
Why?
Because what you ask (the “prompt”), which LLM you ask, and the vector you use to get answers all matter.
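
If you want to try this comparison yourself, here is a minimal sketch in Python that sends the same prompt to each provider’s public API. It assumes the openai, google-generativeai, and mistralai packages are installed and that you have valid API keys; the model names are illustrative choices, not necessarily the models behind the chat interfaces used in the episode.

```python
# Minimal sketch: send the same chocolate-making prompt to three LLM APIs
# and print the answers side by side for comparison.
# Assumes: pip install openai google-generativeai mistralai
# and API keys set in the environment variables referenced below.
import os

import google.generativeai as genai
from mistralai import Mistral
from openai import OpenAI

PROMPT = (
    "I want to make 1 kilo of milk chocolate from already fermented and "
    "dried cocoa beans. What are the steps? What equipment do I need? "
    "What is a good recipe?"
)


def ask_chatgpt(prompt: str) -> str:
    # Direct API call to OpenAI (one possible "vector" into ChatGPT).
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def ask_gemini(prompt: str) -> str:
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-flash")  # illustrative
    return model.generate_content(prompt).text


def ask_mistral(prompt: str) -> str:
    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])
    resp = client.chat.complete(
        model="mistral-large-latest",  # illustrative
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


if __name__ == "__main__":
    for name, ask in [
        ("ChatGPT", ask_chatgpt),
        ("Gemini", ask_gemini),
        ("Mistral", ask_mistral),
    ]:
        print(f"=== {name} ===")
        print(ask(PROMPT))
```

Even with identical prompts, expect the three answers to differ in length, ordering of steps, and level of detail, which is exactly the Goldilocks problem described above.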


Images generated by the Sora/ChatGPT chatbot using the prompt: “Can An LLM Teach You How To Make Chocolate?”
More “Answers” and other resources
DuckDuckGo appears to use the Mistral LLM for its AI Assist (its duck.ai interface explicitly uses Mistral), and Mistral provides different takes.



Questions?
If you have questions or want to comment, you can do so during the episode or, if you are a ChocolateLife member, you can add them in the Comments below at any time.
Episode Hashtags and Socials
#Reddit #LLM #ChatGPT #GoogleGemini #Mistral
#cocoa #cacao #cacau
#chocolate #chocolat #craftchocolate
#PodSaveChoc #PSC
#LaVidaCocoa #TheChocolateLife
Future Episodes
#PodSaveChocolate and #TheChocolateLifeLIVE Archives
To read an archived post and find the links to watch archived episodes, click on one of the bookmark cards below.



Audio-only podcasts