"We can set aside the Rosetta Stone and just speak English to a computer. They will likely understand just as well as when speaking to them in Python. This immediately presents two choices: We can get lazy, or we can elevate our thought."
- Marco Argenti
“If you want to pursue a career in engineering, you should focus on learning philosophy in addition to traditional engineering coursework.”
That’s Marco Argenti, the Chief Information Officer at Goldman Sachs, one of the best-known financial services firms in the world. The quote comes from advice he gave to his daughter, who is in college.
It’s one thing for me to talk about the importance of philosophy for other disciplines like engineering; you would expect that. But to hear a computer engineering specialist at an elite financial services firm talk about the importance of philosophy gives some added credibility to the claim that philosophy’s value might start getting more attention.
He adds:
Coming from an engineer, [learning philosophy] might seem counterintuitive, but the ability to develop crisp mental models around the problems you want to solve and understanding the why before you start working on the how is an increasingly critical skill, especially in the age of AI.
In the context of performing well at a job that uses AI to code, Argenti understands that the quality of a worker’s critical thinking skills will dramatically affect the AI’s inputs, and the inputs will dramatically affect the AI’s outputs.
But here’s the part of Argenti’s piece that struck me:
One of the most important skills I’ve learned in decades of managing engineering teams is to ask the right questions. It’s not dissimilar with AI: The quality of the output of a large language model (LLM) is very sensitive to the quality of the prompt. Ambiguous or not well-formed questions will make the AI try to guess the question you are really asking, which in turn increases the probability of getting an imprecise or even totally made-up answer (a phenomenon that’s often referred to as “hallucination”). Because of that, one would have to first and foremost master reasoning, logic, and first-principles thinking to get the most out of AI — all foundational skills developed through philosophical training. The question “Can you code?” will become “Can you get the best code out of your AI by asking the right question?”
The more AI becomes ubiquitous, and even necessary for some jobs and careers, the more critical it will be to cultivate and master reasoning, logic, and first-principles thinking: foundational skills developed through philosophical training.
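For the programmers in the audience, here is a minimal sketch of the contrast Argenti has in mind. The ask_llm helper below is hypothetical, a stand-in for whatever model API you actually use; the point is the difference between the two prompts, not the plumbing.

# A minimal sketch of Argenti's point, for readers who code with AI.
# ask_llm() is hypothetical, standing in for whatever LLM API you use.
def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call."""
    raise NotImplementedError("wire this up to your model provider")

# Ambiguous prompt: the model has to guess what "fix" and "the date
# problem" mean, raising the odds of an imprecise or invented answer.
vague_prompt = "Fix the date problem in my code."

# Well-formed prompt: states the input, the goal, and the edge-case
# behavior, leaving far less for the model to guess.
precise_prompt = (
    "I have a Python list of dicts with keys 'date' (an ISO 8601 "
    "string) and 'amount' (a float). Write a function that returns "
    "the total amount per calendar month, sorted by month, and "
    "raises ValueError on a malformed date instead of skipping it."
)

The second prompt takes more thought to write, and that thought (clarifying the input, the goal, and the edge cases) is exactly the kind of work Argenti is pointing at.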
But the reason philosophical training is getting more attention with the rise of AI has everything to do with people’s wallets: jobs, earnings, careers. Follow the money. People are motivated to care about philosophical prompts and skills when those skills affect their performance, prospects, and bottom line.
So we will continue to see some people start to value philosophical training because of how it helps them communicate with AI.
But here’s the thing: the more valuable benefits of philosophical training have been in front of them the whole time.
Reasoning, logic, first-principles thinking, and other foundational skills developed through philosophical training have a positive impact on the inputs when communicating with AI, right?
Wouldn’t those same foundational skills have a positive impact when communicating with…real, actual people?
In the quote above, Argenti says,
Ambiguous or not well-formed questions will make the AI try to guess the question you are really asking, which in turn increases the probability of getting an imprecise or even totally made-up answer (a phenomenon that’s often referred to as “hallucination”).
In the flesh-and-blood world of person-to-person communication, ambiguous or not well-formed questions and statements will make the other person try to guess what you are really asking or claiming, which in turn increases the probability of getting an imprecise or even totally made-up response.
People generally want to avoid miscommunication, whether in professional settings or in romantic relationships, friendships, and family life.
How frustrating is it when you're in a conversation that feels more confusing than clarifying? When you can't quite find the right way to say what you want to say?
Two of the most helpful benefits of philosophical training for me have been:
1) Improved mental clarity and critical thinking, and
2) Improved precision and clarity in interpersonal communication.
Of course, I still fall substantially short in both of those areas, but there is no question I’ve experienced at least some improvement in each. And it's a nice feeling when you can open and clarify lines of communication.
Maybe some people just need a different context (AI prompts) to take a second look and find some motivation for leveling up the foundational critical thinking skills we all need to avoid imprecise and made-up responses in our real-world communication.
The discipline of philosophy has been working on those skills for a few thousand years, so it could be the perfect time for philosophical training to have its day, as its value gets more attention.
Until next time.
Jared
P.S. One way to get some of those foundational skills is through my Introduction to Logic course.
This Week's Free Philosophy Resource:
Title: Knowledge and Merely Predictive Evidence
Author: Haley Schilling Anderson
Reading Level: Undergraduate
What's the difference between legal "proof beyond a reasonable doubt" and knowledge? Here's the abstract of the paper:
A jury needs “proof beyond a reasonable doubt” in order to convict a defendant of a crime. The standard is vexingly difficult to pin down, but some legal epistemologists have given this account: knowledge is the standard of legal proof. On this account, a jury should deliver a guilty verdict just in case they know that the defendant is guilty. In this paper, I’ll argue that legal proof requires more than just knowledge that a defendant is guilty. In cases of “merely predictive evidence,” a jury knows that the defendant is guilty but does not have legal proof. What are they missing? Evidence that is causally downstream from the crime. Legal proof requires a “smoking gun.” The point generalizes outside of the courtroom. A professor needs to read a term paper before assigning a grade, even if she knows the student will produce A+ work. You may know that your roommate will forget to water the plants while you are away—she is scatterbrained and always forgets these things—but you can’t blame her until you get back home and see that the plants are wilting. In order to have appropriate reactions or reactive attitudes, we must respond causally to what other people have done.
Missed a week?
You can access all previous newsletters on my Creator Profile here.
I am a proud affiliate of Kit, the newsletter service I use to send this out weekly. If you are interested in creating your own newsletter, I couldn't recommend it more highly. Click here to get started using my affiliate link!
If you like listening to just audio in the car, on a run, or while you're supposed to be working, subscribe to the podcast so you never miss an episode:
If you like watching the conversation, subscribe, and the latest episode will show up in your feed. (Extra credit: like whatever videos you watch if you genuinely like what you're hearing.)
Take a sec to follow us on
X: https://x.com/sellingplato
TikTok: https://www.tiktok.com/@selling.plato
Instagram: https://www.instagram.com/sellingplato/
Facebook: https://www.facebook.com/sellingplato
LinkedIn: https://www.linkedin.com/company/sellingplato
Threads: https://www.threads.net/@sellingplato
🏛️ If you're ready to get started learning logic, I offer a low-cost, subscription-based course. You can try it free for a week and see what you think:
Selling Plato's Dialogues
If you think someone else will like this Dialogues newsletter, please forward it along to your friends and family!
If you received this email as a forward, click to subscribe!