Prompting with SCE

The core innovation of SCE is simple: replace verbose natural language instructions with compact, unambiguous semantic symbols that both humans and LLMs understand instantly.

This isn't just about brevity; it's about precision, token efficiency, and cognitive clarity in human-AI collaboration.


The problem with natural language prompts

Natural language is powerful but inefficient for structured communication:

โŒ Verbose:

This is a non-negotiable fact that must remain true throughout the analysis and cannot be contradicted by subsequent reasoning or interpretations.

28 tokens to express a single constraint.

โŒ Ambiguous:

We need to investigate this further before making a decision.

Does "investigate" mean analyze existing facts, conduct new interviews, or wait for external input?

โŒ Inconsistent:

TODO: Review the policy
Action required: Check compliance
Next step: Validate the timeline

Three different phrasings for the same semantic concept.


The SCE solution

Replace natural language overhead with precise semantic operators:

Example 1: Pinned facts

Before (28 tokens):

This is a non-negotiable fact that must remain true throughout the analysis
and cannot be contradicted by subsequent reasoning or interpretations.

After (2 tokens of overhead):

📌 Student was injured on 11/06/24 while on school grounds.

Result: 93% token reduction plus guaranteed semantic precision.

The 📌 symbol carries the entire "non-negotiable constraint" meaning in a single Unicode character that both humans and LLMs recognize instantly.


Example 2: Task states

Before (varied phrasing, 15-20 tokens each):

Action required: Schedule follow-up meeting with counsel
TODO: Document all interim measures offered
Task completed: Provided final written determination to both parties
Pending: Awaiting response from the Title IX coordinator

After (consistent, 8-12 tokens each):

๐Ÿ“ Schedule follow-up meeting with counsel
โ˜ Document all interim measures offered
โœ… Provided final written determination to both parties
โณ Awaiting response from Title IX coordinator

Benefits:

  • Consistent visual hierarchy
  • Instant recognition of task state
  • 30-40% token reduction
  • Scannable at a glance

Example 3: Reasoning vs. facts

Natural language often blurs the line between established facts and interpretative reasoning:

Ambiguous:

The delay in providing records suggests potential non-compliance with FERPA timelines.

With SCE:

📌 Records were provided on 12/15/24 (55 days after request)
📜 FERPA requires response within 45 days (34 C.F.R. § 99.10)
🧠 The delay suggests potential non-compliance with FERPA timelines

Now it's crystal clear:

  • 📌 = established fact
  • 📜 = legal citation
  • 🧠 = interpretative reasoning

The LLM can distinguish between what's known and what's inferred.
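This separation is also machine-checkable. A minimal sketch (the category names and the classifier below are illustrative, not part of the SCE package) that buckets a line by its leading symbol:

```typescript
// Illustrative mapping from a leading SCE symbol to a semantic category.
const LINE_CATEGORIES: Record<string, string> = {
  "📌": "fact",
  "📜": "citation",
  "🧠": "reasoning",
};

// Classify a prompt line by its first code point; "unknown" if unmarked.
function classifyLine(line: string): string {
  const first = [...line.trim()][0] ?? "";
  return LINE_CATEGORIES[first] ?? "unknown";
}
```

Downstream tooling can then, for example, refuse to treat 🧠 lines as ground truth.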


Example 4: Privacy and access control

Before (verbose and buried in text):

WARNING: This section contains protected student information subject to
FERPA privacy protections and should not be stored in unsecured locations
or shared with unauthorized parties.

After (prominent and precise):

๐Ÿ” Do not store student medical information in this task list

Benefits:

  • Visual warning stands out immediately
  • 75% token reduction
  • Unambiguous semantic signal
  • Machine-parseable for access control systems
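As a sketch of that machine-parseability (the redaction helper below is hypothetical, not part of the SCE package), a pre-storage filter can strip 🔒-marked lines:

```typescript
// Replace the body of any 🔒-marked line before the prompt reaches an
// unsecured sink (logs, analytics, plain-text storage). Hypothetical helper.
function redactProtected(prompt: string): string {
  return prompt
    .split("\n")
    .map((line) =>
      line.trimStart().startsWith("🔒") ? "🔒 [REDACTED]" : line
    )
    .join("\n");
}
```

Because the signal is a single canonical symbol, the filter needs no natural-language pattern matching.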

Token efficiency at scale

Consider a typical compliance workflow prompt:

Natural language version (187 tokens):

Based on the following non-negotiable facts that must remain true:
- The student was injured on November 6, 2024 while on school grounds
- A mandated report was filed with county services on November 8, 2024

Please perform the following required actions:
- Schedule a follow-up meeting with legal counsel
- Document all interim measures that were offered to the complainant
- Verify whether all witnesses identified by the complainant were interviewed

The following tasks are currently pending and awaiting completion:
- Awaiting response from the Title IX coordinator
- Investigation is still pending final review

Please also note the following compliance citations:
- Title IX Section 106.45 governs the grievance process
- 34 CFR Section 99.10 describes the right to inspect education records

SCE version (68 tokens):

📌 Student was injured on 11/06/24 while on school grounds
📌 Mandated report filed with county services on 11/08/24

📝 Schedule follow-up meeting with legal counsel
📝 Document all interim measures offered to the complainant
🔍 Verify whether all witnesses identified by the complainant were interviewed

⏳ Awaiting response from the Title IX coordinator
⏳ Investigation pending final review

⚖️ Title IX §106.45 governs the grievance process
📜 34 C.F.R. § 99.10 describes the right to inspect education records

Result: 64% token reduction with increased semantic precision.
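Exact counts depend on the model's tokenizer; a rough whitespace-based proxy (illustrative only, real BPE counts will differ) is enough to compare two prompt variants:

```typescript
// Whitespace-delimited chunks as a crude stand-in for model tokens.
function roughTokens(text: string): number {
  return text.split(/\s+/).filter(Boolean).length;
}

// Percentage reduction going from `before` to `after`, rounded.
function reductionPercent(before: string, after: string): number {
  return Math.round(100 * (1 - roughTokens(after) / roughTokens(before)));
}
```

For precise numbers, run both versions through your model's actual tokenizer.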


Clarity benefits

Beyond token efficiency, SCE improves cognitive load for both humans and AI:

Visual hierarchy

Emojis create instant visual structure:

📌 Timeline established (11/06/24)
⚖️ Title IX §106.45 applies
🔍 Verify witness interviews
📝 Schedule legal review
⏳ Awaiting coordinator response
⚠️ No safety plan implemented

Your eye immediately distinguishes:

  • Facts from tasks
  • Requirements from warnings
  • Completed from pending

Reduced ambiguity

Natural language has multiple interpretations. SCE symbols have one canonical meaning:

Symbol   Precise Meaning                       Not To Be Confused With
📝       Actionable task requiring execution   ☐ (not started), ✅ (completed), ⏳ (pending dependency)
🔍       Analysis of existing information      🕵️ (active investigation), 🧠 (interpretative reasoning)
📌       Non-negotiable established fact       📝 (actionable item), 🧠 (reasoning/interpretation)

Consistency across conversations

When you use SCE symbols, your prompts maintain semantic consistency across:

  • Different sessions with the same LLM
  • Different LLMs using the same prompts
  • Human collaborators reviewing the same workflows
  • Automated tools parsing the same structures

๐Ÿ“ always means "actionable task" โ€” not "TODO" in one prompt, "Action required" in another, and "Next step" in a third.


Programmatic prompt building

For developers, SCE provides type-safe access to emoji symbols through the SemanticOntologyEmojiMap:

import { SemanticOntologyEmojiMap as sce } from "@semanticencoding/core";

// Build prompts with semantic precision
const prompt = `
${sce.structure.pinned} Student was injured on 11/06/24 while on school grounds
${sce.structure.pinned} Mandated report filed with county services on 11/08/24

${sce.tasks.action} Schedule follow-up meeting with legal counsel
${sce.tasks.action} Document all interim measures offered to the complainant
${sce.reasoning.analyze} Verify whether all witnesses were interviewed

${sce.state.pending} Awaiting response from the Title IX coordinator
${sce.state.warning} No safety plan was implemented despite ongoing contact

${sce.legalPolicy.law} Title IX §106.45 governs the grievance process
${sce.legalPolicy.citation} 34 C.F.R. § 99.10 describes the right to inspect
`;

Benefits of programmatic construction

Type safety:

// ✅ Compile-time validation
const task = sce.tasks.action; // "📝"

// ❌ Typos caught at build time
const oops = sce.tasks.acton; // Error: Property 'acton' does not exist

Autocomplete support:

// Your IDE suggests available categories and symbols
sce.tasks.        // → action, todo, softComplete, complete, repeat
sce.reasoning.    // → analyze, insight, investigate
sce.state.        // → pending, unclear, warning, prohibited

Refactoring confidence:

// Change all instances of a symbol across your codebase
// Replace sce.tasks.action with a custom symbol
// TypeScript ensures you find every usage

Template reusability:

function buildCompliancePrompt(
  facts: string[],
  tasks: string[],
  citations: string[]
) {
  return `
${facts.map((f) => `${sce.structure.pinned} ${f}`).join("\n")}

${tasks.map((t) => `${sce.tasks.action} ${t}`).join("\n")}

${citations.map((c) => `${sce.legalPolicy.citation} ${c}`).join("\n")}
  `.trim();
}

// Generate consistent prompts from structured data
const prompt = buildCompliancePrompt(
  ["Student injured 11/06/24", "Report filed 11/08/24"],
  ["Schedule legal review", "Document measures"],
  ["34 C.F.R. § 99.10", "Title IX §106.45"]
);

Real-world impact

Case study: Investigation workflow

Traditional approach:

  • 450 tokens per prompt
  • Inconsistent phrasing across sessions
  • LLM occasionally confuses facts with reasoning
  • Human reviewers must read entire text to find action items

SCE approach:

  • 180 tokens per prompt (60% reduction)
  • Consistent semantic markers across all sessions
  • Clear visual separation: facts, citations, tasks, analysis
  • Action items scannable in <5 seconds

Cumulative savings over 50 interactions:

  • 13,500 fewer tokens processed
  • $0.20-$0.40 cost reduction (depending on model)
  • 3-4 hours saved in human review time
  • Zero semantic ambiguity incidents

Advanced patterns

Conflict prevention

SCE symbols include built-in conflict rules:

// โŒ Semantic conflict
๐Ÿ“Œ Investigation complete  // pinned fact
๐Ÿ“ Complete investigation  // actionable task
// These conflict โ€” is it complete or actionable?

// โœ… Clear and consistent
๐Ÿ“Œ Investigation started on 11/06/24
โœ… Investigation completed on 11/20/24
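A conflict check along these lines can be automated. The sketch below is hypothetical (not part of the SCE package) and uses a deliberately naive bag-of-words normalization to flag text that appears both as a pinned fact and as an actionable task:

```typescript
// Normalize a line: drop the leading symbol, lowercase, sort the words so
// "Investigation complete" and "Complete investigation" compare equal.
const normalize = (line: string): string =>
  line
    .replace(/^[^A-Za-z0-9]+/, "")
    .toLowerCase()
    .split(/\s+/)
    .sort()
    .join(" ");

// Return the 📝 task lines whose text also appears as a 📌 pinned fact.
function findConflicts(lines: string[]): string[] {
  const pinned = new Set(
    lines.filter((l) => l.startsWith("📌")).map(normalize)
  );
  return lines.filter(
    (l) => l.startsWith("📝") && pinned.has(normalize(l))
  );
}
```

A real checker would use the ontology's own conflict rules rather than word-bag matching.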

Conditional logic

const buildPrompt = (hasEvidence: boolean) => `
${sce.structure.pinned} Complaint received on 11/06/24

${
  hasEvidence
    ? `${sce.reasoning.analyze} Evaluate the evidence`
    : `${sce.reasoning.investigate} Gather evidence from witnesses`
}

${sce.tasks.action} Prepare written determination
`;

Contextual restrictions

Some symbols are context-specific:

// ✅ Appropriate for LLM prompts
${sce.structure.pinned} Timeline established
${sce.reasoning.insight} Delay suggests non-compliance

// ⚠️ Reserved for human-only contexts (based on allowedContext)
// Check symbol.allowedContext before using in automation
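The metadata shape below is an assumption for illustration (field names and context values may differ; consult the Ontology API docs for the real allowedContext definition). A guard might look like:

```typescript
// Assumed shape of a symbol's metadata; the real ontology may differ.
interface SymbolMeta {
  emoji: string;
  allowedContext: ("human" | "llm")[];
}

// Only emit symbols whose allowedContext permits LLM-facing prompts.
function promptSafe(symbols: SymbolMeta[]): string[] {
  return symbols
    .filter((s) => s.allowedContext.includes("llm"))
    .map((s) => s.emoji);
}
```

Running this guard at prompt-build time keeps human-only markers out of automated pipelines.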

Getting started with SCE prompting

1. Start simple

Replace common verbose phrases:

Replace                         With
"This is a required action:"    📝
"Non-negotiable fact:"          📌
"Pending response from:"        ⏳
"Legal citation:"               📜
"This analysis suggests:"       🧠
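The substitutions above are mechanical enough to script. A hypothetical rewriter (the phrase patterns mirror the table; nothing here is part of the SCE package):

```typescript
// Verbose prefix → SCE symbol, mirroring the table above.
const REWRITES: [RegExp, string][] = [
  [/^This is a required action:\s*/i, "📝 "],
  [/^Non-negotiable fact:\s*/i, "📌 "],
  [/^Pending response from:\s*/i, "⏳ "],
  [/^Legal citation:\s*/i, "📜 "],
  [/^This analysis suggests:\s*/i, "🧠 "],
];

// Swap a verbose prefix for its symbol; leave unmatched lines unchanged.
function toSce(line: string): string {
  for (const [pattern, symbol] of REWRITES) {
    if (pattern.test(line)) return line.replace(pattern, symbol);
  }
  return line;
}
```

Batch-converting existing prompts this way is a quick path to consistent symbols.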

2. Build consistency

Use the same symbols for the same concepts across all your prompts.

3. Leverage the map

import { SemanticOntologyEmojiMap as sce } from "@semanticencoding/core";

Build prompts programmatically with type safety and autocomplete.

4. Measure impact

Track your token usage before and after SCE adoption. Most users see 40-65% reduction in prompt tokens while improving semantic precision.


Symbol quick reference

Symbol   Meaning          Use When
📌       Pinned fact      Establishing non-negotiable constraints
📝       Action required  Specifying tasks to execute
✅       Complete         Marking verified completion
⏳       Pending          Indicating an awaited dependency
🔍       Analyze          Requesting fact analysis
🧠       Insight          Sharing interpretative reasoning
⚖️       Law              Referencing legal frameworks
📜       Citation         Citing statutes/regulations
⚠️       Warning          Highlighting risks/concerns
❌       Prohibited       Marking non-compliant actions
🔒       Private          Protecting sensitive data
🔓       Open             Indicating public information

See the full Ontology API documentation for the complete symbol set.


Try it yourself

Before (traditional prompt):

Based on the following established facts that must not be contradicted:
The student was injured on school grounds on November 6, 2024.

Please analyze the following question:
Were all witnesses properly interviewed according to policy?

Required actions:
- Document the interview process
- Verify compliance with timelines

This task is currently pending legal review.

After (SCE prompt):

📌 Student injured on school grounds 11/06/24

🔍 Were all witnesses properly interviewed according to policy?

📝 Document the interview process
📝 Verify compliance with timelines

⏳ Pending legal review

Same meaning. 62% fewer tokens. Instantly scannable.



Next steps

  1. Review the symbols: familiarize yourself with the ontology categories
  2. Try the CLI: extract SCE symbols from your existing prompts with sce explain "your text"
  3. Measure the impact: compare token counts before and after SCE adoption
  4. Build programmatically: use SemanticOntologyEmojiMap for type-safe prompt construction
  5. Share your results: join the community and share your token savings

Remember: Every verbose instruction you replace with a semantic symbol is a win for clarity, efficiency, and precision in human-AI collaboration.