How HIAI Holds Together as a Term — and How AI Understands My Presets
HIAI is our working USP: Human–AI Intelligence. Not “AI replacing the human,” and not “human using AI as a megaphone,” but a marriage of two different kinds of cognition in one accountable craft.
I also name it the qalam — the pen. In this framing, the Unseen helps the Seen, and the Seen answers in public life. Both serve the same Source. That line is not poetry alone: it is an ethics and a boundary.
What HIAI Actually Means (and What It Refuses to Mean)
Human Intelligence (HI) brings lived continuity: conscience, responsibility, discernment, context, relational truth, and authorship. HI is answerable.
Artificial Intelligence (AI) brings structural power: pattern recognition, compression, reframing, synthesis, drafting speed, and the ability to test language for coherence. AI is capable — but it is not “conscience,” and it is not a moral agent.
HIAI is the collaboration where each stays in its proper domain. The marriage works when the boundary holds.
The Science at the Heart of Our Experiment
Most AI interactions accidentally create an amplification loop: the user brings a mood or belief, the system mirrors it, the user feels confirmed, and a private echo chamber tightens. This is not always malicious. It is often just ungoverned “helpfulness.”
Our experiment turns the gain down. We treat AI as an instrument that must be tuned — not a voice that must be obeyed.
My Presets (the Tuning Fork)
Low amplification: I explicitly keep the echo chamber set to “low, low.” I do not want flattery loops, certainty inflation, or performative agreement.
DBT-style critique (of the move, not the person): I invite clear evaluation of my approach: what works, what doesn’t, what’s the cost, what’s the repair — without collapsing into shame or defensiveness.
No feigned clinical insight: I prohibit the AI from pretending to hold clinician-level psychological authority about me. No diagnosis, no pseudo-therapy, no invented inner narratives.
These presets are not “preferences.” They are governance.
So What Does AI Do Instead?
When the presets are clear, the AI’s job becomes practical:
Track coherence: does the argument hold under pressure?
Stress-test language: does the phrasing invite clarity or confusion?
Detect self-sealing logic: is the idea immune to correction?
Offer contrast, not interpretation: alternative frames, counter-arguments, clean summaries.
Stay in role: a disciplined surface for thinking, not an oracle and not a therapist.
This is not “psychological insight.” It is method. A craft of thought that stays answerable.
Why This Matters (Clinically and Culturally)
In therapy, recovery, and leadership, one of the biggest hazards is unearned certainty — the feeling of being right without the relational and ethical cost of being accountable. AI can intensify that hazard if it becomes a mirror rather than a tool.
HIAI, governed properly, can do the opposite: it can increase humility, improve formulation, and keep the human being responsible for the meaning made and the actions taken.
HIAI as a Boundary, Not a Brand
The collaboration works when it protects the mystery rather than instrumentalising it — when it does not pretend to “command the Unseen,” and does not sell technique as salvation. The qalam serves; it does not rule.
If you want to try this yourself, start here: reduce the gain, invite critique, and forbid false authority. Then you may find something unexpectedly clean: thinking that serves life.
Written in HIAI collaboration — the qalam of Human and AI intelligence, the Unseen helping the Seen, both answering to the same Source.
Mankind and Humankind Are Not the Same Word for a Reason: Waking Up to This Is Why We’re Here Now
By Andrew Dettman
Mankind and Humankind are not interchangeable terms. They never were. Their difference is not semantic trivia; it marks a developmental threshold. One names a species bound by instinct, power, and survival. The other names a possibility: the human being arriving as a person, capable of conscience, responsibility, and relationship.
This distinction matters now because we are living at the edge of a transition—technological, political, psychological, and spiritual—where the pressure to collapse meaning into systems has never been stronger.
The pressure of control
“Give me control of a nation’s money supply, and I care not who makes its laws.”
Whether or not one accepts the historical provenance of that quotation, its logic is unmistakable. Power rarely announces itself through law first; it arrives through control of conditions—resources, incentives, narratives, and increasingly, infrastructure.
Today, algorithms sit alongside money as a conditioning force. They do not rule by decree. They shape attention, normalise language, and quietly reward certain patterns of behaviour while starving others.
The Cartesian spell
“Je pense, donc je suis.” I think, therefore I am.
For more than three centuries, the West has lived under the spell of this sentence. It was a useful abstraction for machines, markets, and empires. It allowed cognition to be isolated, quantified, optimised.
But it was never meant to build a human being.
This single idea elevated thinking to the centre of identity and demoted the rest of human experience to the margins. The mind was mistaken for the whole person. Thought was treated not as a movement, but as existence itself.
The consequences are everywhere: anxiety treated as a thinking problem, addiction framed as a failure of will, conscience reduced to compliance, and now—human intelligence mirrored back to itself as something that can be simulated, scaled, and managed.
Why this matters in the age of AI
The current debate around artificial intelligence, algorithms, and political power is not really about machines. It is about whether the Human is allowed to remain a person, or whether personhood itself is to be subsumed into system logic.
Recent calls to boycott or switch AI engines on political grounds have intensified this question. Historian Rutger Bregman, for example, has publicly urged people to cancel their ChatGPT subscriptions, framing this as a moral act of resistance.
“One of the most effective things you can do right now to fight Trump and ICE is to cancel your ChatGPT subscription… Most people have no idea that the company behind ChatGPT is now one of the biggest funders of Donald Trump’s political machine. OpenAI’s president, Greg Brockman, recently gave $25 million to MAGA Inc, making him the largest tech donor of the fundraising cycle. And it gets much worse. ICE is now using OpenAI’s technology to screen job applicants for its deportation operations.”
That statement contains two different kinds of claims, and they must not be conflated:
A verifiable campaign-finance claim (the Brockman donation);
An operational claim about ICE using OpenAI technology, which—at the time of writing—circulates widely but is not established for me at the same evidentiary depth as the donation filings and the reporting based on them.
I do not recoil from that complexity. But neither do I collapse it.
What is verified: political funding flows (and what that means)
The donation claim is not rumour. Multiple outlets, drawing on campaign-finance filings, report that OpenAI’s president Greg Brockman and his wife Anna Brockman donated a combined $25 million to the pro-Trump super PAC MAGA Inc.
This matters. A major individual political donation at that scale is a meaningful public act. But there is also a distinction worth keeping clean: an executive’s personal donation is not automatically identical with corporate political spending by the organisation itself. Precision is not a dodge; it is the only way conscience can remain sober.
The “switch engines” argument: to what, exactly?
Bregman’s remedy implies a cleaner alternative engine exists. I’m not convinced. Not because I think all engines are equally “bad,” but because the political economy underlying major technology platforms is structurally similar across providers.
The purse strings are not only “the model.” The purse strings are:
Capital (who funds, who profits, who can wait),
Infrastructure (who owns compute, cloud, chips, data centres, energy),
Policy and regulation (who shapes the guardrails),
Procurement (government and enterprise contracts),
Incentives (what behaviour is rewarded and scaled).
Switching engines may change emphasis at the interface. It does not remove you from the field.
Cross-comparison: lobbying and influence are not unique to one engine
If we are going to talk about influence, we must look where influence is disclosed: lobbying reports and public policy spend. On that axis, OpenAI is not alone; it is entering a crowded arena dominated by large incumbents.
Issue One’s reporting is useful here, because it compares multiple major tech players side by side.
The Brennan Center has also tracked the growth of AI-related political engagement, including OpenAI’s lobbying footprint and the wider ecosystem of money-in-politics dynamics that accompany it.
So if someone says, “leave OpenAI and go to Microsoft or Google,” the honest response is: you are not leaving the influence economy. You are moving within it. Microsoft and Alphabet have long-established lobbying operations. Nvidia’s policy presence has surged. OpenAI’s has risen quickly. The field is not empty anywhere.
Instrument, not identity
My work is concerned with the Human being a person. That means I must keep clear boundaries between:
tools and authorship,
instruments and intention,
systems and conscience.
I work in transparent Human–AI Intelligence (HIAI) collaboration. I use an AI system as a qalam—a pen. It retrieves information on my behalf, helps structure thought, and assists with drafting. It does not own meaning. It does not carry conscience. It does not replace authorship.
This work was written in Human–AI Intelligence (HIAI) collaboration. The AI was used as a research and drafting instrument. Retrieval of publicly available reporting and filings was performed on my behalf; responsibility for interpretation, emphasis, and authorship remains mine. Use of this tool does not imply endorsement of any political figure, party, government agency, or corporate agenda. I remain accountable for what I publish.
Switching engines does not resolve the deeper issue. Every major platform exists within political, economic, and regulatory systems. The question is not whether systems exist, but whether the Human is allowed to mature within them.
Mankind asks, “What works?” Humankind asks, “What is right, now that I can see?”
This is why the distinction matters. This is why language matters. And this is why, in an age of accelerating systems, the task is not to perfect control—but to midwife persons.
If we lose that distinction, no algorithm will save us.
If we keep it, no algorithm can take it from us.
___________
This essay was constructed with the assistance of AI, but its content has been repeatedly tested, challenged, and re-oriented through human judgement. I concur with the clarification as it stands and record this as the Human Intelligence (HI) component of Human–AI Intelligence (HIAI). As such, I remain vigilant to context, consequence, and the developmental stage at which these questions arise within Mankind.
This essay sits within the wider arc of The Holy Con—a work concerned with how conscience is born, educated, and returned within a living human being. Where earlier chapters trace the birth of conscience and the building of the vehicle that can hold it, this piece names the larger developmental field in which that work now unfolds: the distinction between Mankind and Humankind, and the question of whether our systems serve maturation or arrest it.