The Denison Forum Asked "What Is AI?" and We Should All Pay Attention to Why
A Christian ethics organization just published a primer on artificial intelligence. Before you roll your eyes and scroll past, this matters more than you think—and not for the reasons you'd expect.
The Denison Forum, a faith-based cultural commentary site, recently dropped an explainer titled "What is artificial intelligence?" It's not breaking news that someone wrote another AI explainer. We're drowning in them. But when religious and ethical institutions start asking basic questions about AI, it signals something important: we've crossed from "tech people talking to tech people" into "everyone needs to understand this now" territory.
Why Another "What Is AI?" Article Actually Matters
Here's the thing: most AI explainers are written by tech enthusiasts for tech enthusiasts. They assume you care about transformer architectures and gradient descent. The Denison Forum asking this question represents a different audience entirely—people concerned with moral frameworks, societal impact, and what it means to be human.
And that's exactly who should be asking these questions right now.
We're past the point where AI is just a fascinating technical achievement. When ChatGPT hit 100 million users faster than any consumer app before it, when AI-generated images started winning art competitions, when students began using LLMs to write essays—AI stopped being a tech story and became a human story.
The fact that ethics-focused organizations are now publishing AI primers tells us we're in a new phase. This isn't hype. This is institutional reckoning.
What AI Actually Is (Without the Buzzword Soup)
Let's cut through the noise. Artificial intelligence, at its core, is software that can perform tasks we typically associate with human intelligence: recognizing patterns, making decisions, understanding language, generating content.
But here's what most explainers miss: modern AI doesn't "think" the way you think. It's not conscious. It's really good at statistical pattern matching.
Take a large language model like GPT-4. Feed it this prompt:
prompt = "The opposite of hot is"
response = model.generate(prompt)
# Output: "cold"
It didn't "understand" temperature. It learned from billions of text examples that "cold" frequently appears after "opposite of hot." It's autocomplete on steroids, trained on a massive slice of human writing.
This distinction matters because it shapes how we should think about AI's capabilities and limitations. An LLM can write poetry that makes you cry, but it has no idea what crying is. It can diagnose diseases from medical images with superhuman accuracy, but it doesn't understand suffering.
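You can see the core idea of statistical pattern matching in a toy sketch. To be clear, this is not how GPT-4 works—real LLMs use neural networks over token sequences, not word counts—but a simple bigram model (counting which word follows which) shows how "prediction from frequency" can look like understanding. The corpus and function names here are illustrative inventions:

```python
from collections import Counter, defaultdict

# A tiny toy corpus standing in for the billions of examples an LLM trains on.
corpus = (
    "the opposite of hot is cold . "
    "the opposite of up is down . "
    "the opposite of hot is cold . "
    "the opposite of big is small ."
).split()

# Count which word follows each word: a bigram model.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the continuation seen most often in training, or None."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("is"))   # "cold" — it follows "is" most often in this corpus
print(predict_next("hot"))  # "is" — every "hot" in the corpus was followed by "is"
```

The model "knows" the opposite of hot is cold only in the sense that the statistics of its training data say so. Scale that up by many orders of magnitude and add a neural network, and you have the flavor of what an LLM does.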
The Three Types of AI Everyone Confuses
Narrow AI is what we have now. It's artificial intelligence designed for specific tasks. Your spam filter, Netflix recommendations, voice assistants, ChatGPT—all narrow AI. Really good at one thing (or a set of related things), useless at everything else.
General AI (AGI) is the sci-fi dream: artificial intelligence that can learn and reason across any domain like a human. We don't have this. We're not close. Anyone telling you AGI is "just around the corner" is either selling something or doesn't understand the problem.
Superintelligent AI is theoretical AI that surpasses human intelligence across all domains. This is what keeps people like Eliezer Yudkowsky up at night. We're nowhere near this either, but the philosophical questions it raises are worth considering now.
Most public confusion about AI comes from mixing these categories. When someone says "AI will take all our jobs," they're usually imagining AGI while looking at narrow AI. When someone dismisses AI concerns because "ChatGPT can't even count reliably," they're judging narrow AI by AGI standards.
Why Religious and Ethical Institutions Are Joining the Conversation
The Denison Forum entering this space isn't random. Religious and philosophical institutions are asking questions technologists often ignore:
- What does it mean for human dignity if machines can replicate creative work?
- How do we maintain meaningful human connection in an age of AI companions?
- Who's responsible when an AI system causes harm?
- What happens to human purpose when machines can do our jobs better?
These aren't technical questions. They're deeply human ones. And they can't be answered with better algorithms.
Here's my honest take: the tech industry has been terrible at grappling with these questions. We've been so focused on what we can build that we haven't spent enough time asking what we should build. When ethicists and religious thinkers start publishing AI explainers, they're not late to the party—they're filling a gap we left wide open.
What This Means for the AI Industry
The broadening of the AI conversation beyond tech circles is already reshaping the industry, whether we like it or not.
Regulation is coming. When the general public understands a technology well enough to have opinions about it, legislators pay attention. We're seeing this with the EU AI Act, Biden's executive order on AI, and state-level regulations popping up across the US.
Ethical AI is becoming table stakes. Companies that ignore the moral dimensions of their AI products will face public backlash. We've already seen this with facial recognition, algorithmic bias, and deepfakes.
The talent pool is shifting. The next generation of AI researchers increasingly cares about alignment, safety, and ethical implications. They're not just asking "how do we make this work?" but "how do we make this work responsibly?"
# The old AI development cycle
def build_ai():
    develop_model()
    optimize_performance()
    deploy()

# The new AI development cycle
def build_ai_responsibly():
    develop_model()
    test_for_bias()
    evaluate_safety()
    consider_societal_impact()
    establish_governance()
    optimize_performance()
    deploy_with_monitoring()
    iterate_based_on_real_world_effects()
The Questions We Should Be Asking
Instead of just "What is AI?", we should be asking:
- Who benefits from this AI system, and who bears the risks?
- What human capabilities are we outsourcing, and what are we losing in the process?
- How do we preserve human agency in increasingly automated systems?
- What does meaningful human work look like when machines can do most tasks?
These questions don't have easy answers. They require ongoing dialogue between technologists, ethicists, policymakers, and the public. The fact that organizations like the Denison Forum are engaging means that dialogue is starting to happen.
The Bottom Line
When a Christian ethics organization publishes an AI explainer, it's not just another article in the endless stream of AI content. It's a signal that AI has become everyone's concern, not just Silicon Valley's plaything.
The most important conversations about AI aren't happening in research labs or tech conferences. They're happening in churches, schools, policy meetings, and dinner tables. They're about values, not vectors. About humanity, not hyperparameters.
The tech industry needs to welcome these voices, not dismiss them as unsophisticated or late to the party. Because the hard problems in AI—the ones that will determine whether this technology elevates humanity or diminishes it—aren't technical problems at all.
They're human problems. And we need all the help we can get solving them.