Prompting

Learn prompt engineering fundamentals in Future AGI. Understand template management, linked traces, execution metrics, and prompt performance analytics.

What is Prompt Engineering?

Prompt engineering is the process of crafting, testing, and refining AI prompts to ensure that LLMs generate reliable, high-quality, and contextually appropriate responses. In Future AGI, prompt engineering is structured around template management, execution tracking, optimization, and evaluation, providing a systematic way to improve prompt effectiveness over time.

Linked Traces

Linking prompts to traces is essential for monitoring and improving the performance of your language model applications. By establishing this connection, you can track metrics and evaluations for each prompt version, facilitating iterative improvement over time.

To link a prompt to a trace, associate the prompt used in a generation with the corresponding trace.

Metrics and Analytics

After linking prompts to traces, you can access the following metrics to evaluate performance:

- Median Latency: Time taken for the model to generate a response
- Median Input Tokens: Number of tokens in the input prompt
- Median Output Tokens: Number of tokens in the generated response
- Median Costs: Cost associated with the generation process
- Traces Count: Total number of generations for a specific prompt
- First and Last Generation Timestamp: Timeframe of the generations

These metrics are accessible by navigating to your prompt in the Future AGI dashboard and opening the Metrics tab.
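To make the metrics concrete, here is a minimal sketch of how they can be derived from a set of trace records. The record structure and field names (`latency_ms`, `input_tokens`, etc.) are illustrative assumptions, not the Future AGI API; in practice these values are computed for you and shown in the Metrics tab.

```python
from statistics import median

# Illustrative trace records. Field names are assumptions for this sketch;
# the actual data lives in the Future AGI dashboard.
traces = [
    {"latency_ms": 420, "input_tokens": 120, "output_tokens": 310,
     "cost_usd": 0.0021, "timestamp": "2024-05-01T10:00:00Z"},
    {"latency_ms": 510, "input_tokens": 135, "output_tokens": 280,
     "cost_usd": 0.0024, "timestamp": "2024-05-02T14:30:00Z"},
    {"latency_ms": 380, "input_tokens": 110, "output_tokens": 295,
     "cost_usd": 0.0019, "timestamp": "2024-05-03T09:15:00Z"},
]

metrics = {
    # Medians are robust to outlier generations, which is why the
    # dashboard reports median rather than mean values.
    "median_latency_ms": median(t["latency_ms"] for t in traces),
    "median_input_tokens": median(t["input_tokens"] for t in traces),
    "median_output_tokens": median(t["output_tokens"] for t in traces),
    "median_cost_usd": median(t["cost_usd"] for t in traces),
    "traces_count": len(traces),
    # ISO 8601 timestamps sort lexicographically, so min/max give the
    # first and last generation times directly.
    "first_generation": min(t["timestamp"] for t in traces),
    "last_generation": max(t["timestamp"] for t in traces),
}

print(metrics["median_latency_ms"])  # 420
print(metrics["traces_count"])       # 3
```

Because each metric is keyed to a single prompt version, comparing these summaries across versions shows whether an edit actually reduced latency or cost.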
