Not An LLM

You do not need a better model.
You need better judgment.

I advise senior leaders navigating AI decisions in complex organisations. The problem is rarely model capability. The problem is unclear decision rights, weak constraints, and responsibility spread too thin.

Not An LLM is human-led, system-aware, and deliberately selective. Fewer clients, deeper conversations, clearer accountability.

What This Is Reacting Against

AI hype is usually a systems problem in disguise.

Most failed AI initiatives are not model failures. They are decision failures. Capability is abundant. Constraint awareness is not.

  • Automation decisions are made before the problem is defined.
  • Teams use AI output to avoid accountability for hard trade-offs.
  • Tooling expands, while decision clarity and ownership shrink.
  • Agent sprawl creates orchestration overhead without real leverage.
  • Leadership asks for speed, but nobody can explain what should not be automated.
  • Confidence rises because models sound smart, not because systems got safer.

What This Is For

Senior problems require senior thinking.

Not An LLM is for organisations that already have engineers, vendors, and platforms, but need clarity about where AI genuinely helps and where it creates new risk.

Decision Clarity

Who decides, on what basis, and with what consequences is made explicit before automation scales confusion.

System Understanding

Human incentives, process constraints, and technical architecture are treated as one system, not separate workstreams.

Senior Accountability

Leadership keeps responsibility for outcomes instead of delegating judgment to probabilistic outputs or vendor narratives.

Constraint Over Capability

More capability does not guarantee better outcomes. In misaligned systems, it often accelerates failure.

Speed without ownership increases damage. Automation without decision architecture scales ambiguity. Model sophistication cannot compensate for unclear accountability.

This is why the work starts with constraints, incentives, and decision rights. Then AI can be used intentionally, instead of theatrically.

Who This Is For And Not For

This is for

  • CTO, CIO, COO, CDO, or founder-level leaders carrying real decision responsibility.
  • Organisations with prior AI initiatives that created activity but not confidence.
  • Regulated or risk-sensitive environments where traceability and ownership matter.
  • Teams willing to examine incentives, not just prompts and tooling.

This is not for

  • Teams looking for prompt templates or off-the-shelf automation recipes.
  • Organisations seeking staff augmentation or delivery-heavy implementation support.
  • Contexts where nobody is willing to own the non-technical trade-offs.
  • Leaders who want certainty theatre rather than honest constraint mapping.

Selectivity is intentional. The goal is fewer conversations with better questions.

Engagement Model

No packaged transformation. Just deliberate advisory work.

This is advisory-first work shaped around decisions, not deliverable volume. The objective is better judgment under real constraints.

Conversations Usually Start Here

A focused working session on one live decision. We test assumptions, clarify decision rights, and identify the real system constraint.

What Usually Changes

Ownership becomes explicit, automation scope narrows to high-leverage paths, and leaders gain a clearer basis for trade-offs.

How The Work Continues

Short advisory cycles with direct senior involvement. No dependency-building, no permanent embedding, no transformation theatre.

What I Will And Will Not Do

Will do

  • Critique and validate AI strategy before major commitments are made.
  • Clarify decision architecture, ownership, and escalation paths.
  • Map constraints across organisational, technical, and political layers.
  • Design human-in-the-loop controls proportionate to risk.
  • Analyse why previous AI or transformation efforts underperformed.

Will not do

  • Staff augmentation or long-term delivery retainers.
  • Prompt-engineering services sold as strategy.
  • Day-to-day engineering management.
  • Automation for automation's sake.
  • Generic digital transformation programmes.

Relationship To SSC

Not An LLM

The advisory edge. Opinionated, selective, and judgment-centric. Designed for senior conversations where accountability matters more than volume.

Sperring Software Consulting Ltd

The legal entity and broader capability base. If work needs wider delivery support, it can transition under SSC branding when that is the right fit.

Visit sperring.software

Rationale

Why this exists

I built Not An LLM because I kept seeing the same pattern: organisations moved quickly on AI adoption but slowly on the harder work of decision clarity.

I am not anti-AI. I am anti-misplaced responsibility. Models can support judgment. They cannot carry accountability.

Over the last decade I have worked across product, architecture, and senior technology leadership in complex environments. The recurring failure mode is not missing capability. It is unclear ownership combined with incentives that reward movement over meaning.

Not An LLM is a deliberate constraint on what work I accept. That protects the quality of thinking and keeps the focus where it belongs: decisions that people must stand behind.

Common Questions

Do you implement AI systems directly?

Sometimes, but that is not the default here. Not An LLM is advisory-first. If an engagement requires broader delivery work, it should move under SSC.

Is this only for technical teams?

No. The work usually involves technology and business leadership together, because AI failure modes are often organisational before they are technical.

What does a good first conversation look like?

A specific high-stakes decision, current constraints, and who owns the consequences. We start there, not with tooling preferences.

Will you tell us not to automate?

Sometimes. The goal is not maximal automation. The goal is better outcomes with clearer accountability.

This is not for everyone. That is intentional.

If you are wrestling with AI decisions where responsibility, risk, and system constraints are all in play, we can have a direct conversation.

If this resonates, we can talk.

Talk

Start with one real decision.

Share the decision you are currently wrestling with, why it is hard, and what is at stake. If there is a fit, I will reply personally.

No funnel sequence. No delegated sales process.