PML in Practice: Real-World Examples of Python Markup Language
In our previous article, we introduced Python Markup Language (PML), an extension to Python that provides first-class support for working with Large Language Models. Now, let’s explore how PML is being used in real-world applications, with practical examples that demonstrate its power and flexibility.
From Concept to Implementation
PML isn’t just a theoretical improvement to Python—it’s being actively used in production systems today. The examples we’ll examine showcase different aspects of PML, from simple directives to complex workflows with typed responses, reflection loops, and context management.
Example 1: Simple Goal-Oriented Planning with PML
Let’s start with a straightforward example that demonstrates how PML can be used to express goals and plans:
"""
I use my bank quite extensively. I'm on Revolut right now, and what I do is keep a card
on my iPhone with a low balance, always below 50 bucks or so. Then I issue a new card for
every new service I sign up for, so I'm very sure I'm controlling my flows. Is there any way
to control that via an API? If Revolut can't do it, I could use another bank, or even a
fintech service rather than a bank, as long as it can be controlled via API. Do you know of
anything that offers this functionality?
Goal:
Control my bank flows via API
Plan:
???
"""
This simple example shows how PML can be used to structure natural language inputs into goals and plans. The LLM can then help fill in the missing details of the plan based on the described goal.
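For illustration, the completed block might look like the following once the LLM fills in the plan. These steps are a hypothetical completion, not output taken from the actual system:
"""
Goal:
Control my bank flows via API
Plan:
1. Check whether the Revolut API supports issuing cards and setting limits
2. If not, evaluate banks or fintech services that expose card issuing via API
3. Issue one card per subscription, each with a low spending limit
4. Monitor transactions through the API and alert on unexpected charges
"""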
Example 2: Task Orchestration with PML
Moving to a slightly more complex example, we can see how PML facilitates task orchestration:
# umami-telegram-app
"""
Goal:
Get site metrics and send them to telegram every day.
Plan:
1. Use Umami API to get data
2. Use Telegram Bot API to send messages
- find the best telegram bot framework
3. Use some orchestration tool to run the app
- find the best orchestration tool to run the app on schedule
-> https://telegraf.js.org/
"""
:ask tg_bot_framework
:use search_tool
find the best telegram bot framework
:--
:ask
:do
use https://umami.is/docs/api to get data
use {tg_bot_framework} to send messages
:--
This example demonstrates:
- Goal and plan definition in natural language
- Use of PML directives (:ask, :use, :do) to orchestrate execution
- Capturing and referencing results between steps ({tg_bot_framework})
When processed, the PML compiler converts these high-level directives into executable Python code. For instance, the search directive gets transformed into:
# Auto-generated from PML block
import os
import sys
from src.pml.directives import process_ask, process_do
result_0 = process_do('''
:use search_tool
find the best telegram bot framework
''')
print(result_0)
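The compiled form of the second block isn't shown in the original. A plausible sketch looks like this, assuming the name given to :ask becomes a Python variable that later blocks interpolate (an assumption about the compiler, not something the source confirms):
# Hypothetical continuation of the auto-generated code
tg_bot_framework = result_0  # the :ask name binds to the stored search result
result_1 = process_do(f'''
use https://umami.is/docs/api to get data
use {tg_bot_framework} to send messages
''')
print(result_1)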
Example 3: Sophisticated Wine Expert System
For a more advanced example, let’s examine a wine expert system that showcases PML’s full potential:
@directive()
def doc_summarizer(text_content: str, wine_query: str, context: str) -> DocSummary:
    prompt = f"""
    [CONTEXT]
    {context}
    Summarize the doc below with respect to the query '{wine_query}':
    {text_content}
    Return a JSON with "summary" field only, e.g.
    {{ "summary": "..." }}
    """

    :var raw_llm_output
    :call openai_model
    api_key = os.getenv("OPENAI_API_KEY", "")
    model_name = "gpt-4"
    temperature = 0.1
    input_value = prompt
    :store_to raw_llm_output
    :--

    # In a real implementation, we would parse JSON from raw_llm_output;
    # for simplicity, we just return a mock here.
    return DocSummary(summary=f"Summary of {wine_query} from the document")
This example introduces several advanced PML features:
- Custom Directives: Using the @directive() decorator to define reusable PML components
- Typed Responses: The function returns a structured DocSummary type
- Context Management: Passing context between components
- API Integration: Seamlessly calling the OpenAI API with structured parameters
- Variable Storage: Using :var and :store_to to manage intermediate results
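The mock return at the end of doc_summarizer stands in for a real JSON-parsing step. A minimal sketch of that step, assuming raw_llm_output holds the model's raw string response and using the DocSummary dataclass shown below, might look like this:
import json

def parse_doc_summary(raw_llm_output: str, wine_query: str) -> DocSummary:
    # Try to parse the model's JSON; fall back to a safe default on malformed output.
    try:
        data = json.loads(raw_llm_output)
        return DocSummary(summary=data.get("summary", ""))
    except (json.JSONDecodeError, TypeError):
        # LLMs occasionally wrap JSON in prose; a production version might
        # extract the first {...} block before giving up.
        return DocSummary(summary=f"No parseable summary for {wine_query}")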
The Wine Expert PML System Architecture
This wine expert system represents a sophisticated application of PML, with components that work together in a modular fashion:
# From wine_copilot.pml
from dataclasses import dataclass, field
from typing import List

@dataclass
class DocSummary:
    summary: str

@dataclass
class ReliabilityResult:
    is_reliable: bool
    reason: str

@dataclass
class ImageResult:
    bottle_images: List[str] = field(default_factory=list)
    winery_images: List[str] = field(default_factory=list)
    vineyard_images: List[str] = field(default_factory=list)
    producer_images: List[str] = field(default_factory=list)
The system uses typed data classes to enforce structured outputs from LLM interactions, ensuring type safety and predictability. The workflow includes:
- Initial search for wine information
- Document summarization with reliability checking
- Image search for bottle, winery, vineyard, and producer images
- Interesting facts gathering
- Final answer composition with structured sections
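To make the flow concrete, here is a rough sketch of how the summarization and reliability steps might be chained. The shape of search_results and the skip-on-unreliable policy are assumptions for illustration, not details given in the source:
# Hypothetical wiring of the summarization and reliability steps;
# each search result is assumed to carry 'content' and 'title' fields.
reliable_summaries = []
for doc in search_results:
    summary = doc_summarizer(doc["content"], user_wine_query, global_context)
    check = reliability_checker(summary.summary, doc["title"], global_context)
    if check.is_reliable:
        reliable_summaries.append(summary)
    # Unreliable documents are dropped, along with the checker's stated reason.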
The Reliability Checker: Reflection Loops in PML
One of the most interesting patterns in the wine expert system is the reliability checker, which demonstrates how PML can implement reflection loops:
@directive()
def reliability_checker(doc_summary: str, doc_title: str, context: str) -> ReliabilityResult:
    loop_count = 0
    is_reliable = False
    reason = ""
    while loop_count < 2:
        prompt = f"""
        [CONTEXT]
        {context}
        doc_summary: {doc_summary}
        doc_title: {doc_title}
        loop_count: {loop_count}
        Evaluate if this information is reliable. Consider:
        - Are there contradictions in the summary?
        - Is the information missing key details?
        - Does the title match the summary content?
        Return JSON: {{ "is_reliable": bool, "reason": "..." }}
        """

        :var raw_output
        :call openai_model
        api_key = os.getenv("OPENAI_API_KEY", "")
        model_name = "gpt-4"
        temperature = 0.1
        input_value = prompt
        :store_to raw_output
        :--

        # For demonstration, we mock different responses based on loop_count
        if loop_count == 0:
            # First pass - unsure
            is_reliable = False
            reason = "Initial check finds potential contradictions, performing second review"
            loop_count += 1
            continue
        else:
            # Second pass - more confident
            is_reliable = True
            reason = "After reflection, information appears consistent and reliable"
            break
    return ReliabilityResult(is_reliable=is_reliable, reason=reason)
This implementation shows how PML can encapsulate complex logic that includes:
- Multiple passes through an LLM
- Reflection based on previous results
- Conditional logic that controls the flow
- Structured output with typed results
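Outside the mocked demonstration, the loop would act on the model's actual verdict rather than on loop_count. A minimal sketch of that version, assuming a call_llm helper that returns the raw JSON string:
import json

MAX_PASSES = 2

def check_reliability(call_llm, doc_summary: str, doc_title: str, context: str) -> ReliabilityResult:
    reason = ""
    for loop_count in range(MAX_PASSES):
        raw = call_llm(doc_summary, doc_title, context, loop_count)
        verdict = json.loads(raw)
        reason = verdict.get("reason", "")
        if verdict.get("is_reliable", False):
            # The model is satisfied; stop reflecting early.
            return ReliabilityResult(is_reliable=True, reason=reason)
        # Otherwise iterate so the model can re-evaluate with an updated loop_count.
    return ReliabilityResult(is_reliable=False, reason=reason)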
Multi-Tool Integration: Combining Search and LLM Processing
Another powerful pattern is the integration of multiple tools within PML directives:
@directive()
def fetch_images(wine_name: str) -> ImageResult:
    # Create search queries for different image types
    bottle_query = f"{wine_name} wine bottle images"
    winery_query = f"{wine_name} winery images"

    # Search for bottle images
    :var bottle_results
    :call tavily_search
    api_key = os.getenv("TAVILY_API_KEY", "")
    query = bottle_query
    include_images = True
    search_depth = "advanced"
    topic = "wine"
    max_results = 5
    :store_to bottle_results
    :--

    # Search for winery images
    :var winery_results
    :call tavily_search
    api_key = os.getenv("TAVILY_API_KEY", "")
    query = winery_query
    include_images = True
    search_depth = "advanced"
    topic = "wine"
    max_results = 5
    :store_to winery_results
    :--

    # Process results with LLM to extract image URLs
    prompt = f"""
    Extract image URLs from these search results:
    Bottle search results:
    {bottle_results}
    Winery search results:
    {winery_results}
    Return a JSON with these fields:
    - bottle_images: list of bottle image URLs
    - winery_images: list of winery image URLs
    - vineyard_images: empty list for now
    - producer_images: empty list for now
    """

    :var image_extraction
    :call openai_model
    api_key = os.getenv("OPENAI_API_KEY", "")
    model_name = "gpt-4"
    temperature = 0.1
    input_value = prompt
    :store_to image_extraction
    :--

    # Return structured results (mocked here for demonstration rather than
    # parsed from image_extraction)
    return ImageResult(
        bottle_images=[f"{wine_name}_bottle1.jpg", f"{wine_name}_bottle2.jpg"],
        winery_images=[f"{wine_name}_winery1.jpg"],
        vineyard_images=[],
        producer_images=[]
    )
This demonstrates how PML can orchestrate a complex workflow that:
- Calls external search APIs (Tavily)
- Stores intermediate results
- Processes those results with an LLM
- Formats the output into a strongly-typed structure
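As with doc_summarizer, the mocked return could eventually be replaced by parsing image_extraction. A sketch of that replacement, assuming the model returns the requested JSON shape:
import json

def parse_image_result(image_extraction: str) -> ImageResult:
    # Map the model's JSON fields onto the typed dataclass, tolerating missing keys.
    try:
        data = json.loads(image_extraction)
    except json.JSONDecodeError:
        data = {}
    return ImageResult(
        bottle_images=data.get("bottle_images", []),
        winery_images=data.get("winery_images", []),
        vineyard_images=data.get("vineyard_images", []),
        producer_images=data.get("producer_images", [])
    )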
Creating an End-to-End System with PML
The wine expert system brings all these components together into a cohesive application:
class AnswerFlow:
    def __init__(self, root_dir=None):
        if root_dir is None:
            root_dir = os.path.dirname(os.path.abspath(__file__))
        # Initialize the context manager
        self.ctx_manager = ContextManager(root_dir)
        self.global_context = self.ctx_manager.gather_context(__file__)

    def run(self, user_wine_query: str) -> str:
        # Search for wine information
        tavily_search = TavilySearch({
            "api_key": os.getenv("TAVILY_API_KEY", ""),
            "query": user_wine_query,
            "include_answer": True,
            "search_depth": "advanced",
            "topic": "wine",
            "max_results": 5
        })

        # Process search results and perform document summarization,
        # producing doc_summaries
        # [Implementation details...]

        # Fetch images related to the wine
        images = fetch_images(wine_name=user_wine_query)

        # Find interesting facts about the wine
        facts = find_interesting_facts(user_wine_query, self.global_context)

        # Compose the final answer with all the gathered information
        final_answer = self._compose_final(doc_summaries, images, facts)
        return final_answer
This demonstrates how PML facilitates the creation of complex, multi-step flows that combine traditional Python code with LLM interactions in a clean, maintainable way.
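Running the flow is then a single call; the query below is just an illustrative placeholder:
# Hypothetical invocation of the wine expert flow
flow = AnswerFlow()
print(flow.run("Chateau Margaux 2015"))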
Benefits Observed in Real-World PML Applications
From these examples, we can identify several key benefits of using PML:
- Readability: PML makes the intent of LLM interactions clear and explicit
- Modularity: Components can be defined once and reused across the application
- Type Safety: Structured outputs ensure predictable behavior
- Context Management: Explicit context handling improves LLM performance
- Tool Integration: Seamless combination of LLMs with other services and APIs
- Reflection Capabilities: Support for iterative improvement through multiple passes
Getting Started with Your Own PML Applications
If you’re inspired by these examples and want to try PML yourself, here’s a simple guide:
1. Set up your environment:

pip install knify  # Includes PML support

2. Create a simple PML file (e.g., simple_assistant.pml):

import knify.pml

#<prompt name="assistant">
#| model: gpt-4
#| temperature: 0.7
You are a helpful assistant.
#</prompt>

response = #<ask assistant>
What are the three most important considerations when designing an LLM-powered application?
#</ask>

print(response)

3. Run your PML file:

python -m knify.pml.run simple_assistant.pml
Conclusion: The Future of PML
These real-world examples demonstrate that PML is not just a theoretical improvement but a practical tool that’s already enhancing how developers work with LLMs. As the ecosystem grows, we expect to see:
- More Complex Workflows: End-to-end applications built primarily with PML
- Community-Driven Directives: Libraries of reusable PML components
- Integration with More Tools: Expanded support for diverse APIs and services
- Advanced IDE Support: Better syntax highlighting, code completion, and debugging
In our next article, we’ll explore how knify is using LLMs to revolutionize database management, making entity modeling and migration more intuitive than ever before.
Stay tuned for more insights into how knify is reshaping modern web development!