
What Older Philosophers Can Still Teach AI Builders

Jung, Descartes, and Kant are useful for AI builders when they help us stay humble about meaning, doubt, judgment, and human agency.

Mustafa Sualp
April 17, 2025
6 min read
AI Collaboration

AI makes old philosophical questions feel newly practical.

What counts as understanding? Where does judgment come from? How should we handle uncertainty? What is the difference between useful imitation and meaning? What should remain human-owned?

These are not academic questions when you are building software people will trust with real work.

Jung, Descartes, and Kant do not give us a product roadmap. But they do give us useful warnings.

Jung: symbols matter

Jung cared about the symbols and patterns that shape human meaning.

AI systems are trained on enormous amounts of human language and culture. They can recombine familiar symbols, stories, roles, and metaphors with surprising fluency.

That does not mean AI has an unconscious.

It means builders need to understand that AI output is not neutral. Language carries associations. Metaphors shape expectations. A product that calls an AI "teammate," "companion," or "agent" changes how people interpret its authority and responsibility.

The builder lesson: choose language carefully.

If the system is assisting, say assisting. If it is suggesting, say suggesting. If it needs approval, make that visible.

Descartes: doubt is a feature

Descartes' method of doubt is useful for AI work because AI can sound certain when it should not.

The practical question is not whether a machine "thinks" in the deepest philosophical sense. The practical question is how a human should evaluate a machine-generated claim.

Healthy AI systems should make doubt easier:

  • What is the source?
  • What assumption is being made?
  • What confidence does the system have?
  • What would change the answer?
  • What still requires human review?

AI should not remove skepticism from work. It should give skepticism a better interface.
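As a concrete illustration, the doubt-friendly questions above can be carried as metadata on every machine-generated claim rather than left implicit. This is a hypothetical sketch, not a real API; the `Claim` structure and its field names are assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical sketch: a machine-generated claim that carries the
# metadata a skeptical reviewer needs, rather than bare text.
@dataclass
class Claim:
    text: str                        # the generated statement itself
    sources: list[str]               # what is the source?
    assumptions: list[str]           # what assumption is being made?
    confidence: float                # 0.0-1.0: what confidence does the system have?
    invalidated_by: list[str]        # what would change the answer?
    needs_human_review: bool = True  # what still requires human review? default: everything

    def review_prompt(self) -> str:
        """Render the claim with its doubt-friendly metadata alongside it."""
        return "\n".join([
            f"Claim: {self.text}",
            f"Sources: {', '.join(self.sources) or 'none stated'}",
            f"Assumptions: {', '.join(self.assumptions) or 'none stated'}",
            f"Confidence: {self.confidence:.0%}",
            f"Would change if: {', '.join(self.invalidated_by) or 'unknown'}",
            f"Human review required: {'yes' if self.needs_human_review else 'no'}",
        ])
```

The point of the sketch is the shape, not the fields themselves: skepticism gets an interface when the system surfaces its sources, assumptions, and confidence in the same place as its answer.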

Kant: judgment needs rules and limits

Kant reminds us that knowledge is structured and ethics requires principles.

For AI builders, this points to governance. A model does not become responsible because it is fluent. A workflow does not become safe because it is efficient.

Systems need constraints:

  • permission boundaries
  • role clarity
  • approval paths
  • auditability
  • correction mechanisms
  • privacy rules

These are not afterthoughts. They are part of the product's moral shape.

The shared lesson

The shared lesson from these older thinkers is humility.

AI can generate, synthesize, classify, summarize, and suggest. It can participate in work in ways previous software could not.

But meaning, responsibility, and values still have to be handled carefully by people.

That is why this design direction matters to me: shared context, room-aware agents, durable outputs, and bounded action are not just product features. They are ways of keeping human judgment visible as AI becomes more capable.

Philosophy as a builder discipline

The point is not to turn every product conversation into a seminar.

The point is to ask better builder questions:

  • Are we making the AI sound more authoritative than it is?
  • Are we preserving the reasoning people need to trust the output?
  • Are we giving users the ability to correct the system?
  • Are we protecting private context from becoming shared memory by accident?
  • Are we making human approval visible where it matters?

Those questions are practical.

And they are old.

That is why the older philosophers still matter.


About Mustafa Sualp

Founder & CEO, Sociail

Mustafa is a serial entrepreneur focused on reinventing human collaboration in the age of AI. After a successful exit with AEFIS, an EdTech company, he now leads Sociail, building the next generation of AI-powered collaboration tools.