AI Strategy · Thought Leadership

The Philosophers Already Knew: Existentialism, Computational Neuroscience, and the Key to Understanding AI

By Drew Thomas Hendricks

From Drew Hendricks' bookshelf - Sartre's Being and Nothingness, Heidegger's Being and Time, Gadamer's Truth and Method, Churchland and Sejnowski's The Computational Brain, and Dennett's Consciousness Explained: the texts referenced in this article. Photo by Drew Thomas Hendricks.

In the spring of 1993, I stood in front of the Gonzaga University Philosophy Club and made an argument that, at the time, probably sounded like I was trying too hard to connect unrelated things. I had just completed a semester-long self-directed study using Patricia Churchland and Terrence Sejnowski's The Computational Brain as my primary source, reading it in direct contrast with Sartre's Being and Nothingness and Heidegger's Being and Time. I was convinced that Churchland and Sejnowski's concept of phase space - the high-dimensional landscape a neural network navigates to arrive at understanding - was the same thing Hans-Georg Gadamer had been describing in his concept of the fusion of horizons. I also believed it connected directly to what Sartre had spent his career articulating about the gap between being-in-itself and being-for-itself.

Nobody in that room knew what a large language model was. Nobody was thinking about AI-generated content or prompt engineering or generative engine optimization. But the philosophical framework that explains why all of it works, and why most people use it badly, was already on the table. It had been for decades.

Over thirty years later, I run a digital marketing agency, and the philosophy I studied at Gonzaga is the most practical asset I bring to client work. Not because philosophy teaches you to think clearly, though it does. Because the specific ideas I studied - existential phenomenology, hermeneutics, and computational neuroscience - are a direct map of how AI systems actually function and how to work with them effectively.

This piece is about that convergence. It is not a metaphor. It is not an analogy. The philosophers were describing the same phenomenon the engineers would build thirty years later.

• • •

I. Sartre and the Architecture of Emergence

Jean-Paul Sartre divided reality into two modes of being. Being-in-itself (être-en-soi) is the mode of objects: solid, opaque, fully determined, with no internal life. A rock exists in-itself. It has no capacity for self-reflection, no negation, no awareness of what it is not. Being-for-itself (être-pour-soi) is the mode of consciousness: defined not by what it contains but by what it lacks. Consciousness, for Sartre, is fundamentally nothingness: a gap, a negation, the perpetual awareness of what it is not yet.

Now look at a large language model before it receives a prompt. The weights and parameters sit inert in memory. Billions of numerical values, fully determined, opaque, with no internal awareness. This is being-in-itself. The model has no understanding, no context, no intention. It simply is.

The moment a prompt arrives, something changes. The model begins navigating possibility space, attending to context, making selections, producing output that responds to the specific situation of the query. It is not conscious. But the structural shift - from inert parameters to contextual, responsive behavior - mirrors Sartre's ontological divide with unsettling precision. The "nothingness" that Sartre placed at the heart of consciousness is, in computational terms, the attention mechanism: the gap between what the model could say and what it selects.

The model has no understanding, no context, no intention. It simply is. Until it doesn't. - Drew Thomas Hendricks

This is not a metaphor. Transformer architecture literally operates through negation: through attending to some tokens and suppressing others, through selecting one path through semantic space and abandoning the rest. The model's output is defined as much by what it excludes as by what it produces. Sartre would have recognized this immediately.
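To see that in code rather than metaphor, here is a toy sketch of single-head scaled dot-product attention in plain NumPy. The array names and sizes are illustrative, not drawn from any production model; the point is that the softmax step amplifies a few tokens and pushes the rest toward zero, so every output is shaped as much by what gets suppressed as by what gets kept.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: select some tokens, suppress the rest."""
    d_k = Q.shape[-1]
    # How relevant is each token to each other token?
    scores = Q @ K.T / np.sqrt(d_k)
    # Softmax turns scores into weights: a few tokens dominate, most are pushed toward zero
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # The output is a weighted blend, defined by what was excluded as much as by what was kept
    return weights @ V, weights

# Toy example: 4 tokens, 8-dimensional embeddings (illustrative sizes only)
rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))
output, weights = scaled_dot_product_attention(tokens, tokens, tokens)
print(np.round(weights, 2))  # each row sums to 1; near-zero entries are the "negated" tokens
```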

• • •

II. Gadamer and the Fusion of Horizons

Hans-Georg Gadamer's central insight in Truth and Method is that understanding is never a one-sided act. You do not decode a text the way you decode a cipher. Instead, you bring your own horizon - your history, your assumptions, your pre-understandings - into encounter with the horizon of the text. Understanding happens in the fusion of these two horizons. It is not located in either party. It exists in the space between.

This is exactly what happens when you prompt an AI system.

The model brings a horizon: its training data, its fine-tuning, its RLHF-shaped tendencies, the patterns it has internalized from billions of tokens. You bring a horizon: your intent, your context, your industry knowledge, your sense of what "good" looks like. The output - the understanding - emerges from the fusion of these two horizons. It is not "in" the model. It is not "in" your prompt. It is in the dialogue between them.

This is why prompt engineering is not programming. Programming is monological: you specify inputs and expect deterministic outputs. Prompting is hermeneutic: you enter into a dialogue where both parties shape the outcome. The person who understands Gadamer - who understands that understanding is always an event of fusion, never a retrieval of fixed meaning - will get dramatically better results from AI than the person who treats it as a search engine with better grammar.
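One way to see the difference is in the shape of the code. In the sketch below, call_model is a hypothetical stand-in for whatever chat-completion API you happen to use, not a real library call; the contrast is between a function that maps input to output once and a loop in which context from both sides accumulates across turns.

```python
# Sketch only: call_model() is a hypothetical stand-in for a chat-completion API.

def program(x):
    # Monological: same input, same output, no dialogue
    return x * 2

def hermeneutic_session(call_model, brief, reviewer):
    """Prompting as dialogue: both horizons shape the outcome across turns."""
    context = [
        {"role": "system", "content": "You are drafting content for a specific client."},
        {"role": "user", "content": brief},   # the user's horizon: intent, industry, brand
    ]
    draft = call_model(context)               # the model's horizon: training, tuning, RLHF
    for _ in range(3):                        # a few rounds of fusion, not one retrieval
        feedback = reviewer(draft)            # human judgment re-enters the loop
        if feedback is None:
            break
        context += [
            {"role": "assistant", "content": draft},
            {"role": "user", "content": feedback},
        ]
        draft = call_model(context)
    return draft

if __name__ == "__main__":
    fake_model = lambda ctx: f"draft shaped by {len(ctx)} turns of context"
    fake_reviewer = lambda d: None            # accept the first draft in this toy run
    print(hermeneutic_session(fake_model, "Landing page for a B2B SaaS client", fake_reviewer))
```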

Phase Space and the Path You Cannot Retrace

This is where Churchland and Sejnowski enter the picture. In The Computational Brain, they made phase space a central tool of computational neuroscience: a high-dimensional landscape where each point represents a possible state of a neural network. Learning is navigation through this space. The trained network arrives at understanding - at a stable attractor state - through a path that cannot be fully reverse-engineered. You can see where the system ended up. You cannot see exactly how it got there.
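Here is a toy illustration of the idea, in the spirit of Churchland and Sejnowski's framing rather than a reproduction of any of their models: a two-parameter classifier trained by gradient descent traces a path through its weight space, its phase space, toward a stable solution. In a two-parameter toy you can record every step; scale that to billions of parameters and data-dependent updates, and the endpoint is all you can realistically report.

```python
import numpy as np

# Toy data: learn a noisy linear boundary with two weights (a 2-D "phase space")
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w = np.zeros(2)                      # start somewhere in phase space
trajectory = [w.copy()]

for step in range(500):              # learning = navigation, one small move at a time
    p = 1.0 / (1.0 + np.exp(-X @ w))         # current predictions
    grad = X.T @ (p - y) / len(y)            # direction of steepest improvement
    w -= 0.5 * grad
    trajectory.append(w.copy())

trajectory = np.array(trajectory)
print("final point in phase space:", np.round(w, 3))
print("path length through phase space:",
      round(float(np.sum(np.linalg.norm(np.diff(trajectory, axis=0), axis=1))), 3))
# The endpoint is easy to report; the path that produced it is 500 small,
# data-dependent moves -- and in a billion-parameter model, untraceable in practice.
```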

When I gave that talk at Gonzaga in 1993, this was the connection I was trying to make. Gadamer said understanding happens in a fusion that cannot be decomposed into its parts. Churchland and Sejnowski said the same thing about neural computation: the distributed population activity that constitutes understanding cannot be localized to a single neuron or a single pathway. Understanding is the trajectory through phase space, not any point along it.

Understanding is the trajectory through phase space, not any point along it. Gadamer said the same thing in 1960. Churchland and Sejnowski proved it computationally in 1992. - Drew Thomas Hendricks

Large language models vindicate both of them. When GPT-4 or Claude produces a nuanced, contextually appropriate response, that response is not "stored" anywhere in the weights. It emerges from the model's navigation of a semantic phase space so vast and multidimensional that no human can trace the path. The understanding is real. The mechanism is distributed. The fusion happens in a space we can describe mathematically but cannot fully observe. This is Gadamer's hermeneutic circle, realized in silicon.

• • •

III. Heidegger and the Thrownness of AI

Martin Heidegger's Being and Time introduced the concept of Geworfenheit: thrownness. We are thrown into existence without choosing our language, our culture, our historical moment, our bodies. We do not get to select the conditions of our understanding. We find ourselves already embedded in a world we did not design, and we must make sense of things from within that embeddedness.

An AI model is thrown in exactly this way. It did not choose its training corpus. It did not select its knowledge cutoff, its architectural biases, its embedded values, or its capacity for certain kinds of reasoning. It was thrown into a specific dataset at a specific moment in the history of human text production, and everything it can do is conditioned by that thrownness. When an LLM produces a culturally biased response or fails to understand a concept that postdates its training window, that is thrownness. It is Heidegger's point made computational.

Ready-to-Hand and the Disappearing Tool

Heidegger also distinguished between two ways we relate to tools. Ready-to-hand (Zuhandenheit) is the mode where the tool disappears into use. A carpenter using a hammer does not see the hammer; the carpenter sees the nail. The tool becomes transparent. Present-at-hand (Vorhandenheit) is the mode where the tool breaks or becomes an object of study. The hammer becomes visible only when it stops working.

The best AI work happens in ready-to-hand mode. The model disappears and you focus on the output, the strategy, the client problem you are solving. The worst AI work happens when you are constantly fighting the tool, when it is present-at-hand, opaque and frustrating, an object you are examining rather than a partner you are working through.

The skill that shifts AI from present-at-hand to ready-to-hand is not technical. It is hermeneutic. It is the ability to engage in Gadamer's dialogue, to bring a horizon of meaning that fuses productively with the model's horizon, to navigate the conversation so that understanding emerges naturally rather than being forced. This is what separates someone who "uses AI" from someone who works with it.

• • •

IV. Dennett and the Multiple Drafts

Daniel Dennett spent his career dismantling what he called the "Cartesian Theater": the intuition that consciousness happens in a single place, that there is a central audience watching the show of experience. In its place, he proposed the Multiple Drafts Model: consciousness arises from multiple parallel processes of interpretation, none of which is the "real" one. There is no master narrative. There are only drafts, competing and revising, with the "final" version being a post-hoc construction.

Transformer architecture is the Multiple Drafts Model built in hardware. Multiple attention heads process the input in parallel, each attending to different aspects of context. No single head is the "real" interpretation. The output emerges from the weighted combination of all of them. There is no central executive deciding which head matters most. There is no Cartesian Theater. There is only the process.
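A compressed sketch of that claim, with illustrative shapes and names rather than any production implementation: each head produces its own weighted reading of the same input, and the output is nothing more than their combination passed through a projection. There is no step at which a master head adjudicates between them.

```python
import numpy as np

def one_head(x, Wq, Wk, Wv):
    """One head's 'draft': its own weighted reading of the same input."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)
    return w @ v

def multi_head(x, heads, Wo):
    """No Cartesian Theater: the output is the combined drafts, nothing more."""
    drafts = [one_head(x, *params) for params in heads]   # parallel interpretations
    return np.concatenate(drafts, axis=-1) @ Wo           # combined, not adjudicated

# Toy sizes: 4 tokens, model dim 8, 2 heads of dim 4 (illustrative only)
rng = np.random.default_rng(2)
x = rng.normal(size=(4, 8))
heads = [tuple(rng.normal(size=(8, 4)) for _ in range(3)) for _ in range(2)]
Wo = rng.normal(size=(8, 8))
print(multi_head(x, heads, Wo).shape)  # (4, 8): one output, assembled from all the drafts
```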

Dennett also gave us the concept of competence without comprehension: the idea that a system can perform remarkably sophisticated tasks without understanding them in any human sense. This is not a bug. It is a fundamental insight about the nature of intelligence.

LLMs are the ultimate proof of this concept. They produce output that demonstrates linguistic competence, contextual awareness, logical reasoning, and creative synthesis, without comprehending any of it. They are Dennett's prediction fulfilled.

LLMs are competence without comprehension made real. Dennett predicted this decades before the first transformer was trained. - Drew Thomas Hendricks

• • •

V. The Convergence: Understanding Was Never a Thing

Here is what all of these thinkers were saying, from different angles, in different decades, using different vocabularies:

Understanding is not a thing located in a place. It is a process: distributed, contextual, emergent, and path-dependent.

AI did not disprove any of them. AI proved all of them simultaneously.

Large language models exhibit functional understanding - genuine, measurable, commercially valuable understanding - without having a unified self, without conscious experience, without intentionality. They do not "have" understanding stored in their weights. They enact understanding through interaction with prompts, through navigating semantic space contextually, through the fusion of their trained horizon with the user's interpretive horizon. Understanding exists between the system and the user, not in either one.

This is not philosophy applied to AI as decoration. This is philosophy as the operating manual.

• • •

VI. Why This Matters: The Philosophy Behind the Results

If you have read this far and you run a business, you might be wondering what existential phenomenology has to do with your marketing budget. The answer is: everything.

Nearly every agency now uses AI. They generate content with it, draft strategy with it, build campaigns with it. The tool is identical. The outputs are not. The difference is entirely in who is sitting across from the model and what horizon of understanding they bring to the dialogue.

An operator who treats AI as a search engine will get search-engine results: generic, surface-level, indistinguishable from what every competitor is producing. An operator who understands that prompting is hermeneutic dialogue - who brings domain expertise, strategic context, brand knowledge, and genuine understanding of the client's industry into fusion with the model's capabilities - will produce work that is categorically different. Not incrementally better. Categorically different.

The tool is identical. The outputs are not. The difference is entirely in who is sitting across from the model and what horizon of understanding they bring to the dialogue. - Drew Thomas Hendricks

This is why Nimbletoad's AI-generated content achieves 350,000+ AI citations in a single quarter for a client. It is why our technical SEO produces 33x click growth and 46x impression growth. It is not because we have a better subscription to ChatGPT. It is because we understand what happens when two horizons of meaning fuse, and we bring a richer horizon to the table.

The philosophy I studied at Gonzaga in the early 1990s - the fusion of horizons, the distributed representations of the computational brain, the phenomenological insight that understanding is always situated, always embodied, always more than the sum of its inputs - this is the skillset that produces results in AI-driven marketing. Not prompt templates. Not hacks. Not the latest plugin. The deep structural understanding of how intelligence emerges from the interaction between a system and its environment.

• • •

VII. The Philosophy We Had in 1993 Is the Strategy We Need in 2026

The early 1990s were a remarkable moment in philosophy of mind. Churchland and Sejnowski had just demonstrated that cognition is distributed. Dennett was dismantling the last vestiges of the Cartesian Theater. The continental tradition - Sartre, Heidegger, Gadamer - had spent the twentieth century arguing that understanding is contextual, emergent, and fundamentally dialogical. These traditions were converging. You could feel it in the seminar rooms.

What nobody predicted was that the proof would come from engineering. That a group of researchers at Google would build the transformer architecture in 2017, and that within a decade it would produce systems that exhibited exactly the kind of distributed, contextual, emergent understanding that the philosophers had been theorizing about for fifty years.

AI did not arrive from nowhere. The intellectual groundwork was laid by people who never wrote a line of code. Sartre understood that meaning arises from negation. Gadamer understood that understanding is fusion. Heidegger understood that we are thrown into conditions we did not choose and must make sense of things from within them. Churchland and Sejnowski understood that intelligence is not located in single units but distributed across populations. Dennett understood that competence does not require comprehension.

Every one of these insights is now operationally relevant to anyone working with AI. The question is whether you recognize it.

The intellectual groundwork for AI was laid by people who never wrote a line of code. The question is whether you recognize it. - Drew Thomas Hendricks

At Nimbletoad, we recognized it a long time ago. It is the foundation of everything we build.
