Hosting your own LLMs like Llama 3.1 requires INSANELY powerful hardware – which often makes running them completely unrealistic. But in this video I reveal a strategy for starting cheap with self-hostable LLMs and then scaling them indefinitely as your app/business grows…
00:00 – 02:58 – The Problem with Local LLMs
02:59 – 03:35 – The Strategy for Local LLMs
03:36 – 08:02 – Exploring Groq’s Amazingness
08:03 – 13:59 – The Groq to Local LLM Quick Maths
14:00 – 14:43 – Outro
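
The "Groq to Local LLM Quick Maths" chapter boils down to comparing a pay-per-token API bill against a fixed GPU rental. Here is a minimal sketch of that comparison in Python – every price and throughput figure below is a placeholder assumption, not a number from the video, so plug in current Groq/RunPod pricing before drawing conclusions:

```python
# Hypothetical break-even sketch: pay-per-token API vs. a rented GPU.
# Every number here is a placeholder assumption, NOT a figure from the video.

API_PRICE_PER_M_TOKENS = 5.00   # $/1M tokens (assumed hosted-API price)
GPU_PRICE_PER_HOUR = 0.50       # $/hr (assumed GPU rental price)
GPU_TOKENS_PER_SEC = 100        # assumed self-hosted throughput
HOURS_PER_MONTH = 730

def api_cost(tokens_per_month: float) -> float:
    """Monthly API bill: scales linearly with usage."""
    return tokens_per_month / 1_000_000 * API_PRICE_PER_M_TOKENS

def gpu_cost() -> float:
    """Monthly GPU rental: fixed, no matter how much you use it."""
    return GPU_PRICE_PER_HOUR * HOURS_PER_MONTH

def break_even_tokens() -> float:
    """Monthly token volume at which the fixed GPU matches the API bill."""
    return gpu_cost() / API_PRICE_PER_M_TOKENS * 1_000_000

def gpu_capacity_tokens() -> float:
    """Max tokens one GPU can serve per month at the assumed throughput."""
    return GPU_TOKENS_PER_SEC * HOURS_PER_MONTH * 3600

if __name__ == "__main__":
    print(f"GPU fixed cost: ${gpu_cost():.2f}/month")
    print(f"Break-even:     {break_even_tokens()/1e6:.0f}M tokens/month")
    print(f"GPU capacity:   {gpu_capacity_tokens()/1e6:.0f}M tokens/month")
```

Below the break-even volume the API is cheaper (you only pay for what you use); above it, a rented GPU wins – as long as the volume still fits within one GPU's capacity.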
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Services I mentioned in this video (I am not sponsored by any of them):
Groq: https://groq.com/
RunPod: https://www.runpod.io/
DigitalOcean: https://www.digitalocean.com/
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Artificial Intelligence is no doubt the future of not just software development but the whole world. And I’m on a mission to master it – focusing first on mastering AI Agents.
Join me as I push the limits of what is possible with AI. I’ll be uploading videos at least twice a week – Sundays and Wednesdays at 7:00 PM CDT! Those uploads cover everything AI, with a focus on practical, high-value education. I’ll also sometimes post on Fridays at 7:00 PM CDT – specifically for platform showcases – sometimes sponsored, always creative in approach!