Edit and Run Dataset Eval — API
Update an evaluation's config and key mapping, then optionally re-run it across all dataset rows to refresh scores.
Endpoint: https://api.futureagi.com/model-hub/develops/{dataset_id}/edit_and_run_user_eval/{eval_id}/

Authentication
Path parameters
- dataset_id: UUID of the dataset containing the evaluation to edit.
- eval_id: UUID of the user eval metric to update.
Request body
- Updated configuration object for the evaluation, containing:
  - Template-specific configuration parameters.
  - Runtime parameters for the evaluation engine.
- Mapping of eval template variable keys to dataset column names.
- Whether to create or keep a reason column alongside the eval result column.
- UUID of a knowledge base to associate with this evaluation.
- Whether to enable error localization for this evaluation.
- Whether to re-run the evaluation after updating its configuration.
- save_as_template: whether to save the updated configuration as a new eval template.
- Name for the new eval template. Required when save_as_template is true.
- UUID of an experiment. When provided, the evaluation is looked up by experiment scope rather than dataset scope, and reason columns are reconciled across all experiment data tables.
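The body fields above can be sketched as a Python helper that assembles the request payload. Note the hedging: of the JSON field names used below, only save_as_template is named on this page; every other key (config, key_mapping, rerun, and so on) is an assumption for illustration and should be checked against the actual API schema.

```python
# Sketch of the request body for this endpoint. All field names
# except save_as_template are ASSUMED; verify against the real schema.

def build_edit_and_run_payload(
    config,                    # updated configuration object (assumed name)
    key_mapping,               # eval variable key -> dataset column (assumed name)
    reason_column=False,       # create/keep a reason column (assumed name)
    knowledge_base_id=None,    # optional knowledge base UUID (assumed name)
    error_localization=False,  # enable error localization (assumed name)
    rerun=True,                # re-run the eval after updating (assumed name)
    save_as_template=False,    # named in this page
    template_name=None,        # required when save_as_template is true
    experiment_id=None,        # optional experiment scope (assumed name)
):
    """Assemble a request body matching the fields described above."""
    if save_as_template and not template_name:
        raise ValueError("template_name is required when save_as_template is true")
    payload = {
        "config": config,
        "key_mapping": key_mapping,
        "reason_column": reason_column,
        "error_localization": error_localization,
        "rerun": rerun,
        "save_as_template": save_as_template,
    }
    # Optional fields are omitted rather than sent as null.
    if knowledge_base_id is not None:
        payload["knowledge_base_id"] = knowledge_base_id
    if template_name is not None:
        payload["template_name"] = template_name
    if experiment_id is not None:
        payload["experiment_id"] = experiment_id
    return payload
```

The helper mirrors the validation rule stated above (a template name is mandatory when saving as a template) so that malformed payloads fail locally instead of at the API.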
Response
200 OK: Confirmation message indicating the evaluation was updated.
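A minimal call sketch using only the Python standard library. The URL template comes directly from this page; the HTTP method (assumed POST, since the call mutates state) and the Authorization bearer header are assumptions, so confirm both against the Authentication section of the API docs before use.

```python
import json
import urllib.request

BASE = "https://api.futureagi.com/model-hub/develops"

def make_edit_and_run_request(dataset_id, eval_id, payload, api_key):
    """Build (but do not send) the HTTP request for this endpoint.

    The POST method and the Authorization header format are assumptions;
    only the URL template is taken from the page above.
    """
    url = f"{BASE}/{dataset_id}/edit_and_run_user_eval/{eval_id}/"
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
        },
        method="POST",  # assumed method
    )

# To send: urllib.request.urlopen(req). A 200 OK response carries a
# confirmation message that the evaluation was updated (and re-run,
# if requested in the body).
```

Separating request construction from sending makes the request easy to inspect or unit-test without touching the network.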