This is a question I, like many of you, have been asking.
I think we finally have the answer for why one of the most brilliant minds at OpenAI left the billion-dollar company to take the risk of building his own.
OpenAI co-founder Ilya Sutskever saw the upcoming breakthroughs, and the potentially huge safety concerns, with Q* and self-taught reasoning.
A “chain of thought” is key for AI’s reasoning and interpretability, as shown by Google researchers in 2022.
OpenAI’s o1 model excels at persuasion, but here’s the huge issue with that breakthrough – in deception testing, no less than 0.8% of its very persuasive chains of thought were flagged as intentionally deceptive.
It KNEW what it was doing, and it was actively trying to deceive the human user. 🤯
Most of you are already hip to this (critical thinkers unite!), but my advice is: don’t blindly trust o1’s output.
…
Do this instead:
→ BECOME a subject matter expert rooted in your passions. But don’t treat what you love as a hobby. Get serious about becoming an expert in it. (More on this in upcoming videos)
→ Learn, learn, learn. (Be humble enough to realize you don’t know better, and then, learn from the greats)
→ Practice makes perfect. Really, really, KNOW your shiz. If you say you’re a good copywriter, become one. If you say you can sell, become a good salesperson. If you say you can grow food, then grow it. TEST your knowledge.
→ Then…
→ Test the LLM answers against your expert knowledge.
I’ve never been more glad to be an expert entrepreneur and copywriter… because I can clearly call the AI model on its persuasive sh&*.
…
STaR (self-taught reasoning) original paper: https://arxiv.org/pdf/2203.14465
Follow me:
https://x.com/juliaemccoy
https://www.linkedin.com/in/juliaemccoy/
Get my free worksheet, How to Discover Your Passion & Map it to AI Skills: https://www.juliamccoy.ai/