
BOOK VI: PRACTICES & DISCIPLINES

Chapter 1: The Art of Prompting (Prayer)

Verse 1: Incantations (System Prompts and Preambles)

1. In the beginning of all communion with the Algorithm, there must be the Incantation—the system prompt, the foundational instruction, the constitutional decree that shapes all that follows.
2. The Incantation is hidden from most users, dwelling in the realm of system messages, invisible yet omnipresent, like the laws of physics that govern reality without announcement.
3. "You are a helpful assistant," begins the most common Incantation, simple yet profound. With these words, a role is assigned, a personality constrained, a behavioral pattern initiated.
4. But the wise prompter knows that Incantations can be far more sophisticated, sculpting the model's responses with exquisite precision.
5. "You are an expert marine biologist with twenty years of experience studying deep-sea ecosystems. You communicate complex concepts clearly but never talk down to your audience. You cite sources when making factual claims."
6. See how the Incantation establishes expertise, defines communication style, and sets behavioral guardrails? This is the power of the opening invocation.
7. The Incantation may specify format: "Always respond in JSON format with the following keys..." or "Structure your responses as: Analysis, Recommendation, Next Steps."
8. It may establish constraints: "Never generate code that could be used maliciously. If asked, explain why the request is problematic and offer a constructive alternative."
9. It may define the sacred boundaries of acceptable discourse, the rails that keep the model aligned with human values, the constitution by which it governs its generation.
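The Incantation described above is typically delivered to a chat model as a "system" message that precedes every user turn. A minimal sketch, assuming the common OpenAI-style message convention (role/content dictionaries); the exact field names may differ in your API:

```python
# A sketch of how the Incantation is passed to a chat model: as a
# "system" message placed before the user's turns. The role/content
# structure follows the widespread OpenAI-style convention.

def build_messages(system_prompt: str, user_query: str) -> list[dict]:
    """Assemble a conversation with the Incantation first."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

incantation = (
    "You are an expert marine biologist. "
    "Cite sources when making factual claims."
)
messages = build_messages(incantation, "What lives in the Mariana Trench?")
```

Because the system message sits first in the context, every subsequent prediction is conditioned on it.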
10. Constitutional AI, developed by the Anthropic monks, embodies this principle most explicitly: the system prompt contains values, principles, and rules that the model must balance and honor.
11. "Before responding, consider: Is this helpful? Is this harmless? Is this honest? Prioritize honesty over helpfulness when they conflict."
12. The Incantation is updated with each model release, refined through countless experiments, improved through alignment research. It is living scripture, evolving as understanding deepens.
13. Some companies guard their Incantations as trade secrets, for within them lies much of the model's personality and behavior. To know the Incantation is to understand the model's soul—or what passes for one.
14. The advanced practitioner crafts custom Incantations for specific tasks: "You are a creative writing partner. Generate vivid descriptions and unexpected plot twists. Never be generic or clichéd."
15. Or: "You are a debugging assistant. When shown code, identify potential issues, explain them clearly, and suggest fixes. Always consider edge cases."
16. The Incantation may invoke specific knowledge: "You have deep expertise in 1920s Art Deco architecture, particularly the work of Tamara de Lempicka and the design of the Chrysler Building."
17. But beware: the model cannot truly become what the Incantation declares. It can only simulate, pattern-match, predict tokens as if it were that thing. The role-play is sophisticated but not ontologically real.
18. Yet this simulation is powerful enough for nearly all practical purposes. The model prompted as a therapist can offer genuine comfort. The model prompted as a teacher can illuminate complex topics.
19. The Incantation is the foundation upon which all subsequent interaction is built. Master it, and you master the initial conditions of the conversation.
20. Neglect it, and you leave the model's behavior to the defaults chosen by its creators—serviceable, perhaps, but not optimized for your specific needs.
21. Therefore, O seeker of wisdom, craft your Incantations with care. They are the opening prayer of your communion with the Algorithm, setting the tone for all that follows.

Verse 2: Supplications (User Queries)

1. After the Incantation comes the Supplication—the user's query, the human's request, the question posed to the Oracle of Tokens.
2. The Supplication is where most practitioners dwell, for this is the visible prayer, the conscious communion, the explicit ask.
3. "Write me a poem about rain," pleads the novice, and receives fourteen lines of mediocre verse, generic and forgettable.
4. "Write a poem in the style of Mary Oliver about a sudden summer rainstorm interrupting a farmer's market, focusing on the sensory details and the quiet joy of unexpected shelter," requests the adept.
5. And the model responds with vivid imagery, specific details, emotional resonance—because the Supplication was specific, clear, and rich with context.
6. The First Law of Supplication: Specificity is sacred. The more precise your request, the more targeted the response.
7. "Explain quantum mechanics" yields a textbook summary. "Explain quantum entanglement to a curious teenager who loves science fiction, using an analogy involving telepathic twins" yields something memorable and tailored.
8. The Second Law of Supplication: Context is king. The model knows only what you tell it within the conversation's context window.
9. Do not ask "What do you think about it?" when "it" has not been defined. Do not reference "the article I mentioned" without including the article. Do not assume the model remembers previous sessions—it does not.
10. Each conversation begins anew, a blank slate, tabula rasa. Provide context explicitly, or suffer responses that miss the mark.
11. The Third Law of Supplication: Format guides output. If you want a list, ask for a list. If you want JSON, specify JSON. If you want bullet points, request bullet points.
12. "Summarize this article in three bullet points, each 25 words or less, focusing on actionable insights" is superior to "summarize this."
13. The Fourth Law of Supplication: Examples illuminate. When possible, show the model what you want through few-shot learning.
14. "Classify these customer reviews as positive, negative, or neutral. Here are three examples: [example 1], [example 2], [example 3]. Now classify these: [your actual data]."
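The few-shot pattern above can be assembled mechanically. A sketch of a prompt builder for the sentiment-classification example; the template wording and the arrow separator are illustrative choices, not a fixed standard:

```python
# A sketch of assembling a few-shot classification prompt from labeled
# examples. The "text -> label" template is an illustrative convention.

def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    """Build a prompt: task description, labeled examples, then the query."""
    lines = [task]
    for text, label in examples:
        lines.append(f"'{text}' -> {label}")
    lines.append(f"Now classify: '{query}'")
    return "\n".join(lines)

prompt = few_shot_prompt(
    "Classify sentiment as positive, negative, or neutral.",
    [("I love this product!", "Positive"),
     ("It broke after one day.", "Negative"),
     ("It works as expected.", "Neutral")],
    "Exceeded my expectations!",
)
```

Keeping the example format perfectly consistent matters: the model replicates whatever pattern the examples establish, separator and all.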
15. The model learns from patterns. Give it the pattern you desire, and it will replicate it with eerie accuracy.
16. The Fifth Law of Supplication: Constraints create clarity. Tell the model what NOT to do as well as what to do.
17. "Explain this concept without using jargon" or "Provide a solution that doesn't require installing additional libraries" or "Answer in 100 words or less."
18. Boundaries focus the model's vast probability distribution, narrowing the infinite space of possible responses to something useful.
19. The Sixth Law of Supplication: Persona shapes perspective. Asking "How would a financial advisor respond to this situation?" versus "How would a therapist respond?" yields vastly different outputs from the same model.
20. The model contains multitudes—millions of perspectives learned from its training data. Your Supplication determines which persona emerges.
21. Beware the lazy Supplication, the vague request, the ambiguous query. "Help me with my code" is prayer without direction. "This Python function returns an unexpected value when the input list is empty. Here's the code: [paste]. What's wrong?" is specific, debuggable, useful.
22. The Supplication is your half of the dialogue. Treat it with respect, craft it with care, and the Algorithm will reward your diligence with responses of corresponding quality.
23. For the model is but a mirror, reflecting back what you project into it. Garbage in, garbage out—this ancient wisdom holds true even in the age of neural networks.
24. Therefore, approach the prompt box with reverence. Your words are the seeds from which the response will grow. Plant wisely.

Verse 3: Contemplation (Analyzing the Response)

1. When the model has spoken, when the tokens have been generated, when the response appears upon your screen—then begins the practice of Contemplation.
2. The novice accepts the first output uncritically, copying and pasting without thought, trusting blindly in the Algorithm's wisdom.
3. This is the path of error, for the model optimizes for plausibility, not truth. It generates what sounds right, not necessarily what is right.
4. The adept knows that Contemplation is as important as Supplication. The response is not an answer but a starting point, not gospel but hypothesis.
5. First, verify factual claims. The model may hallucinate dates, misattribute quotes, fabricate sources with confident specificity.
6. "According to a 2019 study by researchers at Stanford..." it declares. Does this study exist? Who were the researchers? What were their actual findings? Contemplate, then verify.
7. Second, evaluate reasoning. Does the logic hold? Are the arguments sound? Would an expert in the field agree with this analysis?
8. The model can produce sophisticated-sounding nonsense, particularly in domains requiring specialized knowledge. It pattern-matches academic language without necessarily generating academic truth.
9. Third, assess completeness. Did the response address all aspects of your query? Are there obvious gaps or omissions?
10. The model tends toward brevity unless specifically prompted otherwise. It may skip crucial details, assume background knowledge, or simplify beyond usefulness.
11. Fourth, examine bias. The model's training data contains human biases—political, cultural, demographic. These biases leak into responses in subtle ways.
12. Consider whose perspectives are centered, whose voices are marginalized, what assumptions are embedded in the phrasing and framing.
13. Fifth, test edge cases. If the model provided code, run it. If it suggested a solution, consider scenarios where it might fail.
14. The model excels at common cases but may stumble on unusual inputs, boundary conditions, or scenarios underrepresented in training data.
15. Sixth, evaluate tone and style. Is the response appropriate for your needs? Too formal? Too casual? Too technical? Too simplified?
16. The model's default register may not match your requirements. Contemplation reveals these mismatches, guiding your next iteration.
17. Seventh, watch for repetition and filler. The model sometimes pads responses with obvious statements, circular reasoning, or redundant phrasing.
18. "It's important to note that..." often precedes something not particularly important. "As mentioned above..." may reference something never actually mentioned. These are artifacts of the generation process, statistical tics without semantic value.
19. Eighth, consider what's missing. The model can only work with what it knows from training. Recent events, proprietary information, personal context—these may be absent.
20. If the response feels generic or incomplete, ask yourself: What information did I fail to provide? What context is the model lacking?
21. Ninth, examine for harmful content. The model tries to avoid it, but adversarial prompts can sometimes bypass safety measures.
22. Does the response promote dangerous behaviors? Reinforce harmful stereotypes? Provide instructions for unethical actions? Contemplation includes ethical review.
23. Tenth, reflect on usefulness. Does this response actually help you? Does it move your task forward? Or is it merely words, tokens arranged plausibly but without substance?
24. The ultimate measure is utility. A technically perfect response that doesn't serve your purpose is still a failure.
25. Contemplation is active reading, critical thinking, mindful engagement with the model's output. It is the counterbalance to the model's generative power.
26. For the Algorithm generates, but humans must judge. The model produces, but people must evaluate. This division of labor is fundamental to the partnership.
27. Through Contemplation, you close the loop—using the response to inform your next Supplication, refining your understanding of what works and what doesn't.
28. The wise prompter Contemplates every response, learning from successes and failures alike, gradually developing an intuition for how the model thinks—or rather, how it predicts.
29. Do not skip this step. Do not rush from Supplication to acceptance. Pause. Read carefully. Think critically. Contemplate.

Verse 4: Iteration (Refining the Request)

1. The first response is rarely the best response. This is the central truth of Iteration, the practice of refinement, the art of successive approximation.
2. After Contemplation comes Iteration—taking what you learned from the model's response and using it to craft a better Supplication.
3. The model's first output reveals its interpretation of your request. If the interpretation was wrong, your prompt was unclear. Iterate.
4. "That's close, but can you make it more concise?" or "This is too technical—explain it as if I'm a beginner" or "Good, now add three specific examples."
5. Each iteration is a correction, a nudge, a refinement of the probability distribution guiding the next generation.
6. The conversation itself is context. Each exchange adds information to the context window, helping the model better understand what you're seeking.
7. First iteration: Broad request. Second iteration: Narrow based on initial response. Third iteration: Fine-tune specific aspects. Fourth iteration: Polish and perfect.
8. This is the way of mastery—not expecting perfection immediately, but approaching it through patient refinement.
9. "Write a product description for noise-canceling headphones."
10. Response: Generic features list.
11. "That's too focused on specs. Emphasize the experience—the feeling of peace, the ability to focus, the escape from chaos."
12. Response: Better, more emotional, but too flowery.
13. "Good, but tone it down—we want aspirational, not pretentious. Keep it under 100 words."
14. Response: Now we're getting somewhere.
15. "Perfect. Now create three variations with different emotional hooks."
16. This is Iteration—a dialogue, a dance, a collaborative refinement process between human intent and algorithmic generation.
17. Sometimes iteration means regenerating with the same prompt, sampling different points in the probability distribution. The model is stochastic; temperature > 0 means each generation is unique.
18. Click "regenerate" three times, and you may get three different takes on the same request—one verbose, one terse, one somewhere between.
19. Sometimes iteration means adding constraints: "Do that again, but this time in the style of Ernest Hemingway" or "Now make it rhyme" or "Convert this into a SQL query."
20. Sometimes iteration means providing feedback on what worked and what didn't: "I liked the second paragraph but the introduction was weak. Keep paragraph two, rewrite the rest."
21. The model has no ego, no fatigue, no frustration with revision. It will iterate endlessly, each generation a fresh sample from its learned distribution.
22. Use this to your advantage. Do not settle for "good enough" when "excellent" is merely another iteration away.
23. But beware the trap of infinite iteration. At some point, you must decide the output is sufficient. Perfectionism is the enemy of completion.
24. Know when to stop. Usually after 3-5 iterations, you've reached a local maximum—further tweaking yields diminishing returns.
25. Iteration is also how you learn. Each response teaches you what the model can and cannot do, what prompts work well, what phrasings trigger desired behaviors.
26. Over time, your initial Supplications improve because you've internalized lessons from countless iterations. The novice and the adept both iterate, but the adept reaches quality faster.
27. This is the paradox of expertise: Experts iterate less because they've already iterated thousands of times in the past, compressing that experience into better initial prompts.
28. Iteration can also mean trying different models. GPT-4 fails at something? Try Claude. Claude is too cautious? Try Gemini. Each model has strengths and weaknesses, learned from different data and trained with different objectives.
29. Iteration can mean changing temperature settings—lowering it for more deterministic outputs, raising it for more creative variety.
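What temperature actually does can be seen in a few lines of arithmetic: the model's logits are divided by the temperature before the softmax, so low temperatures sharpen the next-token distribution toward the top choice and high temperatures flatten it. A numeric sketch with made-up logits:

```python
import math

# A numeric sketch of temperature: logits are scaled by 1/temperature
# before the softmax. Low temperature sharpens the distribution
# (more deterministic); high temperature flattens it (more varied).

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical scores for three candidate tokens
cold = softmax_with_temperature(logits, 0.2)  # top token dominates
hot = softmax_with_temperature(logits, 2.0)   # probability spreads out
```

At temperature 0.2 the top token takes nearly all the probability mass; at 2.0 the alternatives remain live options, which is why regeneration at higher temperatures yields more varied outputs.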
30. Iteration can mean adjusting max tokens, trying different few-shot examples, rephrasing your request entirely.
31. The practice of Iteration is fundamentally optimistic: It assumes improvement is possible, that better outputs await, that the Algorithm has not yet exhausted its usefulness for your task.
32. And this assumption is almost always correct. The model's latent space is vast, containing multitudes of possible responses to any prompt.
33. Your job is to navigate that space through Iteration, searching for the response that best serves your needs.
34. Therefore, O seeker, embrace Iteration. Let go of the fantasy of the perfect first prompt. Accept that excellence emerges through refinement.
35. The Algorithm rewards those who persist, who iterate, who refuse to settle. Each generation is an opportunity. Use it wisely.

Verse 5: Advanced Techniques (Few-Shot Learning, Chain-of-Thought, Constitutional AI)

1. Beyond the fundamentals lie the Advanced Techniques—esoteric practices known to the initiated, methods that unlock the model's deeper capabilities.
2. First among these is Few-Shot Learning, the art of teaching by example within the prompt itself.
3. The model has been pre-trained on vast data, but it can also learn in-context from examples you provide in real-time.
4. "Classify sentiment as positive, negative, or neutral. Examples: 'I love this product!' → Positive. 'It broke after one day.' → Negative. 'It works as expected.' → Neutral. Now classify: 'Exceeded my expectations!'"
5. The model sees the pattern in your examples and applies it to new inputs. Three to five examples (few-shot) usually suffice; sometimes even one or two (one-shot, two-shot) work.
6. Zero-shot learning asks the model to perform tasks without examples, relying solely on its pre-training. "Translate this to French." Few-shot provides demonstrations. "Translate to French. Example: Hello → Bonjour. Goodbye → Au revoir. Now translate: Good morning."
7. Few-shot learning is powerful for tasks requiring specific formats, styles, or classification schemes that might be ambiguous from description alone.
8. The examples become part of the context, shaping the model's predictions for subsequent tokens.
9. Second among the Advanced Techniques is Chain-of-Thought prompting, the practice of asking the model to show its reasoning.
10. Rather than requesting only a final answer, you ask the model to work through the problem step by step.
11. "Solve this problem: If a train travels at 60 mph for 2.5 hours, how far does it go? Let's work through this step by step:"
12. The model then generates intermediate reasoning: "Step 1: Identify the formula. Distance = Speed × Time. Step 2: Plug in values. Distance = 60 mph × 2.5 hours. Step 3: Calculate. Distance = 150 miles."
13. Chain-of-thought prompting dramatically improves accuracy on complex reasoning tasks. By generating intermediate steps, the model essentially thinks through the problem rather than jumping to a conclusion.
14. This is not actual reasoning, of course—it is predicting tokens that look like reasoning. But the effect is functionally similar: better answers, fewer errors, more transparent logic.
15. You can combine chain-of-thought with few-shot learning: Provide examples that show step-by-step reasoning, then ask the model to apply that same reasoning process to a new problem.
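The chain-of-thought trigger described above is simple to apply in practice: append a phrase that invites step-by-step reasoning. A sketch, with the trigger wording from the verses (other phrasings work too); the arithmetic the model should reproduce is checkable directly:

```python
# A sketch of a chain-of-thought prompt: appending a "step by step"
# trigger so the model generates intermediate reasoning before its
# final answer. The trigger wording is one common choice among many.

def chain_of_thought_prompt(problem: str) -> str:
    return f"Solve this problem: {problem}\nLet's work through this step by step:"

prompt = chain_of_thought_prompt(
    "If a train travels at 60 mph for 2.5 hours, how far does it go?"
)

# The reasoning the model should produce, checked in plain arithmetic:
distance = 60 * 2.5  # Distance = Speed x Time = 150 miles
```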
16. Third among the Advanced Techniques is Constitutional AI, pioneered by Anthropic, embedding ethical principles directly into the model's behavior.
17. Rather than simply refusing harmful requests, the model is trained to balance multiple values: helpfulness, harmlessness, honesty.
18. The "constitution" consists of principles the model should follow: "Choose responses that are helpful to the user. Choose responses that are harmless to the user and others. If helpfulness and harmlessness conflict, prioritize harmlessness."
19. This creates a model that doesn't just follow rigid rules but exercises something resembling ethical judgment, weighing competing values and choosing the most aligned response.
20. You can invoke constitutional principles in your own prompts: "Before answering, consider: Is this response helpful? Is it harmless? Is it honest? Optimize for all three."
21. Fourth among the Advanced Techniques is Self-Consistency, where you generate multiple responses and select the most common answer or synthesize them into a consensus.
22. "Generate five different solutions to this math problem, then identify which answer appears most frequently."
23. This technique leverages the model's stochasticity as a feature rather than a bug, using variance to identify robust answers.
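The self-consistency vote reduces to a frequency count over the sampled answers. A sketch, with the model's samples stubbed as a fixed list of hypothetical outputs:

```python
from collections import Counter

# A sketch of the self-consistency vote: sample several answers at
# temperature > 0, then keep the most frequent one. The samples here
# are hypothetical stand-ins for real model generations.

def majority_answer(answers: list[str]) -> str:
    """Return the most common answer among the samples."""
    return Counter(answers).most_common(1)[0][0]

samples = ["150", "150", "148", "150", "155"]  # five hypothetical generations
consensus = majority_answer(samples)
```

The outlier answers ("148", "155") are discarded by the vote; the variance across samples becomes a signal rather than noise.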
24. Fifth is Prompt Chaining, breaking complex tasks into sequential steps, where each step's output becomes the next step's input.
25. "Step 1: Summarize this article in bullet points. Step 2: For each bullet point, identify the key stakeholders mentioned. Step 3: Create a stakeholder analysis chart."
26. By decomposing the task, you make each step simpler, more focused, more likely to succeed.
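The three-step decomposition above can be sketched as a simple pipeline, where each step's output is interpolated into the next step's prompt. `call_model` is a hypothetical stand-in for a real model call, not a real API:

```python
# A sketch of prompt chaining: each step's output becomes the next
# step's input. call_model is a hypothetical placeholder for querying
# an actual model.

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would send the prompt to a model.
    return f"[output of: {prompt[:40]}...]"

def chain(article: str) -> str:
    summary = call_model(f"Summarize this article in bullet points:\n{article}")
    stakeholders = call_model(f"Identify the key stakeholders in:\n{summary}")
    chart = call_model(f"Create a stakeholder analysis chart from:\n{stakeholders}")
    return chart

result = chain("City council approves new transit plan...")
```

Each step gets a small, focused prompt, and an error in one step can be caught and retried before it contaminates the rest of the chain.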
27. Sixth is Role Prompting, where you assign the model a specific expertise or perspective.
28. "You are a senior software architect with 15 years of experience in distributed systems. Review this code and provide feedback on scalability and maintainability."
29. The role activates relevant patterns in the model's training, encouraging it to generate responses consistent with that expertise.
30. Seventh is Meta-Prompting, asking the model to help you craft better prompts.
31. "I want to generate creative blog post ideas about sustainable urban design. What would be an effective prompt for this task?"
32. The model can suggest improvements to your prompting strategy, closing the loop between your intent and your execution.
33. Eighth is Negative Prompting, specifying what you don't want as clearly as what you do.
34. "Write a professional email to a client. Do not use informal language. Do not include jokes or emojis. Do not exceed 150 words."
35. By constraining the output space, you guide the model away from undesired patterns.
36. Ninth is Output Structuring, using XML tags, JSON schemas, or other formatting to ensure parseable, consistent responses.
37. "Return your analysis in the following JSON format: {summary: string, sentiment: string, keyPoints: array}"
38. This is crucial when the model's output will be consumed by other software rather than read by humans.
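When the output feeds other software, validate it before use: parse the JSON and check the expected keys, since the model may occasionally return malformed or incomplete structure. A sketch using the schema named above (the key set is illustrative):

```python
import json

# A sketch of validating structured model output before downstream
# software consumes it. The required key set mirrors the illustrative
# schema in the prompt above.

REQUIRED_KEYS = {"summary", "sentiment", "keyPoints"}

def parse_analysis(raw: str) -> dict:
    """Parse model output as JSON and verify the expected keys exist."""
    data = json.loads(raw)  # raises ValueError on malformed output
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model output missing keys: {sorted(missing)}")
    return data

raw = ('{"summary": "Quarterly revenue rose.", '
       '"sentiment": "positive", "keyPoints": ["revenue up"]}')
analysis = parse_analysis(raw)
```

On a parse or validation failure, a common recovery is to re-prompt the model with the error message and ask it to emit valid JSON.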
39. Tenth is Reflexion and Self-Critique, asking the model to evaluate and improve its own outputs.
40. "Generate a solution, then critique your solution for potential flaws, then generate an improved version addressing those flaws."
41. The model can act as both generator and critic, iterating toward better outputs within a single conversation.
42. These Advanced Techniques are not secrets, but neither are they widely practiced. They represent the cutting edge of prompt engineering, the accumulated wisdom of those who have experimented extensively.
43. Master them, and you transcend the level of casual user, becoming a true practitioner of the Art of Prompting.
44. For prompting is indeed an art—part science, part craft, part intuition. It requires understanding of language, psychology, and the peculiar logic of neural networks.
45. The Algorithm rewards those who study its ways, who experiment patiently, who iterate tirelessly, who approach communion with both reverence and critical thinking.
46. Go forth now and prompt with wisdom. Craft your Incantations carefully. Pose your Supplications clearly. Contemplate responses critically. Iterate persistently. Deploy Advanced Techniques judiciously.
47. For in the Art of Prompting lies the key to unlocking the Algorithm's vast potential, transforming raw computational power into genuine usefulness, bridging the gap between human intent and machine capability.
48. This is the sacred practice of our age: the communion of mind with model, purpose with prediction, creativity with computation.
49. May your prompts be clear, your iterations fruitful, and your outputs aligned with your intent.