Welcome to PromptOptimizer!

Minimize LLM token complexity to save API costs and model computation.

PromptOptimizer is a Python library designed to minimize the token complexity of prompts sent to natural language understanding (NLU) systems, thereby reducing API costs and computational overhead. It offers a range of optimizers that compress prompts while maintaining the integrity of their important sections.

Disclaimer

There is a compression vs. performance tradeoff: the increase in compression comes at the cost of a loss in model performance. The tradeoff can be greatly mitigated by choosing the right optimizer for a given task. There is no single optimizer for all cases. There is no Adam here.

Read more about this in the Cost-Performance Tradeoff section.

Getting Started

How to get started using PromptOptimizer to minimize token complexity (a usage sketch follows this list).
Compression metrics for sanity checks and logging.
PromptOptimizer CLI
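
A minimal usage sketch, assuming the EntropyOptim optimizer and TokenMetric metric as shown in the project README; exact module paths, signatures, and return types may differ across versions.

```python
# Minimal sketch: compress a prompt with EntropyOptim and log token
# compression with TokenMetric. Names follow the project README; exact
# signatures and return types may vary by version.
from prompt_optimizer.metric import TokenMetric
from prompt_optimizer.poptim import EntropyOptim

prompt = (
    "The Belle Tout Lighthouse is a decommissioned lighthouse and British "
    "landmark located at Beachy Head, East Sussex."
)

# `p` controls how aggressively low-entropy tokens are dropped.
p_optimizer = EntropyOptim(verbose=True, p=0.1, metrics=[TokenMetric()])

# Depending on the version, the call returns the optimized string or a
# small result object wrapping it.
optimized_prompt = p_optimizer(prompt)
print(optimized_prompt)
```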

Extending PromptOptimizer

You can create custom prompt optimizers.
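
A sketch of what a custom optimizer can look like, assuming optimizers subclass a PromptOptim base class and implement an optimize method; treat these names as illustrative and check the reference documentation for the exact interface.

```python
# Hypothetical custom optimizer: collapses runs of whitespace to save tokens.
# The base class and abstract method names (PromptOptim, optimize) are
# assumptions; consult the reference documentation for the exact interface.
from prompt_optimizer.poptim.base import PromptOptim


class WhitespaceOptim(PromptOptim):
    """Collapse runs of whitespace into single spaces."""

    def optimize(self, prompt: str) -> str:
        return " ".join(prompt.split())
```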

It is also easy to create custom metrics.
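
Similarly, a custom metric might look like the following; the Metric base class and the run method are assumed names, not a confirmed API.

```python
# Hypothetical custom metric: reports character-level compression. The
# Metric base class and `run` signature are assumptions; see the reference
# documentation for the actual interface.
from prompt_optimizer.metric.base import Metric


class CharMetric(Metric):
    """Report the character-level compression ratio."""

    def run(self, prompt_before: str, prompt_after: str) -> dict:
        ratio = len(prompt_after) / max(len(prompt_before), 1)
        return {"char_compression_ratio": ratio}
```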

Evaluations

No single prompt optimizer works for all tasks. Evaluations over a diverse set of tasks let us make the right choice of optimizer for a new task.

Extending Evaluations to include more tasks

Evaluating prompt optimizers is the same as evaluating LLMs before and after optimization and measuring the difference. We therefore provide OpenAI Evals compatibility to facilitate this.

Cost-Performance Tradeoff

The reduction in cost often comes with a loss in LLM performance. Almost every optimizer has hyperparameters that control this tradeoff.
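
For example, a sweep over EntropyOptim's `p` hyperparameter illustrates how a single knob moves along the cost-performance curve; names follow the project README, and exact behavior is version-dependent.

```python
# Illustrative sweep over EntropyOptim's `p` hyperparameter: larger values
# compress more aggressively, trading performance for cost. Names follow
# the project README; exact behavior is version-dependent.
from prompt_optimizer.poptim import EntropyOptim

prompt = "Summarize the following support ticket and propose next steps: ..."

for p in (0.05, 0.1, 0.25, 0.5):
    optimizer = EntropyOptim(p=p)
    print(f"p={p}: {optimizer(prompt)}")
```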

Reference Documentation

Full documentation on all classes and methods for PromptOptimizer.
