Not An LLM
I advise senior leaders navigating AI decisions in complex organisations. The problem is rarely model capability. The problem is unclear decision rights, weak constraints, and responsibility spread too thin.
Not An LLM is human-led, system-aware, and deliberately selective. Fewer clients, deeper conversations, clearer accountability.
What This Is Reacting Against
Most failed AI initiatives are not model failures. They are decision failures. Capability is abundant. Constraint awareness is not.
Automation decisions are made before the problem is defined.
Teams use AI output to avoid accountability for hard trade-offs.
Tooling expands, while decision clarity and ownership shrink.
Agent sprawl creates orchestration overhead without real leverage.
Leadership asks for speed, but nobody can explain what should not be automated.
Confidence rises because models sound smart, not because systems have become safer.
What This Is For
Not An LLM is for organisations that already have engineers, vendors, and platforms, but need clarity about where AI genuinely helps and where it creates new risk.
Who decides, on what basis, and with what consequences is made explicit before automation scales confusion.
Human incentives, process constraints, and technical architecture are treated as one system, not separate workstreams.
Leadership keeps responsibility for outcomes instead of delegating judgment to probability or vendor narratives.
Constraint Over Capability
More capability does not guarantee better outcomes. In misaligned systems, it often accelerates failure.
Speed without ownership increases damage. Automation without decision architecture scales ambiguity. Model sophistication cannot compensate for unclear accountability.
This is why the work starts with constraints, incentives, and decision rights. Then AI can be used intentionally, instead of theatrically.
Who This Is For And Not For
Selectivity is intentional. The goal is fewer conversations with better questions.
Engagement Model
This is advisory-first work shaped around decisions, not deliverable volume. The objective is better judgment under real constraints.
A focused working session on one live decision. We test assumptions, clarify decision rights, and identify the real system constraint.
Ownership becomes explicit, automation scope narrows to high-leverage paths, and leaders gain a clearer basis for trade-offs.
Short advisory cycles with direct senior involvement. No dependency-building, no permanent embedding, no transformation theatre.
What I Will And Will Not Do
Relationship To SSC
Not An LLM is the advisory edge: opinionated, selective, and judgment-centric. Designed for senior conversations where accountability matters more than volume.
SSC is the legal entity and broader capability base. If work needs wider delivery support, it can transition under SSC branding when that is the right fit.
Visit sperring.software
Rationale
I built Not An LLM because I kept seeing the same pattern: organisations moved quickly on AI adoption, but slowly on the harder work of decision clarity.
I am not anti-AI. I am anti-misplaced responsibility. Models can support judgment. They cannot carry accountability.
Over the last decade I have worked across product, architecture, and senior technology leadership in complex environments. The recurring failure mode is not missing capability. It is unclear ownership combined with incentives that reward movement over meaning.
Not An LLM is a deliberate constraint on what work I accept. That protects the quality of thinking and keeps the focus where it belongs: decisions that people must stand behind.
Common Questions
Sometimes, but that is not the default here. Not An LLM is advisory-first. If an engagement requires broader delivery work, it should move under SSC.
No. The work usually involves technology and business leadership together, because AI failure modes are often organisational before they are technical.
A specific high-stakes decision, current constraints, and who owns the consequences. We start there, not with tooling preferences.
Sometimes. The goal is not maximal automation. The goal is better outcomes with clearer accountability.
If you are wrestling with AI decisions where responsibility, risk, and system constraints are all in play, we can have a direct conversation.
If this resonates, we can talk
Talk
Share the decision you are currently wrestling with, why it is hard, and what is at stake. If there is a fit, I will reply personally.
No funnel sequence. No delegated sales process.