1. Select Dataset

Click the dataset you want to use for creating prompts. If no datasets appear in the dashboard, make sure you have followed the steps to Add Dataset on the Future AGI platform.

2. Access Run Prompt Interface

Your dataset appears in a spreadsheet-like interface. In the top-right corner, select the Run Prompt option to create a prompt.

3. Configure Your Prompt

Basic Configuration

  1. Enter a descriptive name for your prompt
  2. Select your desired model from the dropdown

API Key Setup

After selecting a model, enter your API key in the popup window to authorize access to it. In this example, we’re using gpt-4o.

Output Configuration

Choose the output type for your prompt:

  • string: For simple text responses (e.g., “correct”/“incorrect”)
  • object: For JSON-structured outputs (see the example below)
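
The two types differ in how you consume the result downstream. Here is a minimal sketch in Python of how each type might look and be handled; the field names in the object example are hypothetical, not a schema the platform prescribes:

```python
import json

# string: plain text, use the value as-is
string_output = "correct"

# object: JSON-structured text; these field names are illustrative only
object_output = '{"label": "correct", "confidence": 0.92}'

parsed = json.loads(object_output)  # object outputs parse as JSON
print(parsed["label"])              # -> "correct"
```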

Writing Your Prompt

Reference dataset columns using double curly braces. As you type, a dropdown menu appears showing the available columns; the column name you select is automatically enclosed in braces.
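
For example, a prompt template might look like the following (the column name review_text is hypothetical; substitute a column from your own dataset):

```
Classify the sentiment of the review below as "positive" or "negative".

Review: {{review_text}}
```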

4. Model Parameters

Configure these parameters to optimize your model’s performance:

| Parameter | Description | Impact |
| --- | --- | --- |
| Concurrency | Number of simultaneous prompt processes | Higher values increase speed but may hit API limits |
| Temperature | Controls response randomness | 0: deterministic; 1: more creative but potentially less accurate |
| Top P | Controls token selection diversity | Lower: more focused; higher: more varied responses |
| Max Tokens | Maximum response length | Higher values allow longer responses but increase API usage |
| Presence Penalty | Controls topic repetition | Higher: more diverse topics; lower: more focused on a single topic |
| Frequency Penalty | Controls word/phrase repetition | Higher: less repetition; lower: allows repetition |
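
To make the table concrete, here is a minimal sketch of where each parameter lands in a raw OpenAI chat-completion call. The platform fills these in from the form for you; Concurrency is a client-side setting, not an API parameter, so it does not appear in the call:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The platform substitutes column values into your template before the call;
# here a literal string stands in for a rendered prompt.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Classify the sentiment of: great product!"}],
    temperature=0.2,        # 0 = deterministic, 1 = more creative
    top_p=0.9,              # lower = more focused token selection
    max_tokens=256,         # caps response length and API usage
    presence_penalty=0.0,   # raise for more topic diversity
    frequency_penalty=0.0,  # raise to discourage repetition
)
print(response.choices[0].message.content)
```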

Response Format

  1. Choose between text and JSON output formats
  2. Configure tool interaction (see the sketch below):
    • required: Force tool usage
    • auto: Let the model decide
    • none: Disable tool interaction
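
A minimal sketch of how these options map onto the OpenAI API, using a hypothetical calculator tool; the required/auto/none choices correspond to the tool_choice parameter:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    # response_format={"type": "json_object"},  # JSON output (prompt must ask for JSON)
    tools=[{
        "type": "function",
        "function": {
            "name": "calculator",  # hypothetical tool, for illustration only
            "description": "Evaluate an arithmetic expression",
            "parameters": {
                "type": "object",
                "properties": {"expression": {"type": "string"}},
                "required": ["expression"],
            },
        },
    }],
    tool_choice="auto",  # "required" forces a tool call; "none" disables tools
)
```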

5. Execute Prompt

Click Save and Run to execute your prompt configuration. The generated responses appear in a new column named after your prompt.

Best Practices

• Start with lower concurrency to test API limits
• Use temperature 0.0-0.3 for factual tasks
• Use temperature 0.7-1.0 for creative tasks
• Set reasonable max token limits to control costs
• Test prompts on a small subset before full execution (the sketch below combines this with a capped concurrency)
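
A minimal sketch of the first and last points combined, assuming you replicate a run outside the platform with the OpenAI SDK: a semaphore caps concurrency, and only the first few rows are processed as a dry run (review_text is a hypothetical column name):

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI()        # reads OPENAI_API_KEY from the environment
limit = asyncio.Semaphore(2)  # start low; raise once you know your rate limits

async def run_row(row: dict) -> str:
    async with limit:  # at most 2 requests in flight at once
        resp = await client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user",
                       "content": f"Classify the sentiment of: {row['review_text']}"}],
            temperature=0.0,  # factual task: keep it deterministic
            max_tokens=64,    # keep costs bounded during the dry run
        )
        return resp.choices[0].message.content

async def main(rows: list[dict]) -> list[str]:
    # Dry-run on the first 10 rows before committing to the full dataset
    return await asyncio.gather(*(run_row(r) for r in rows[:10]))
```

Run it with `asyncio.run(main(rows))`; once the outputs look right, widen the slice and raise the semaphore.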