Most AI applications, whether a RAG-based chatbot or a simple model wrapper, rely on prompts to generate responses. User or application inputs are converted into prompts, which are then fed to the underlying model. Because these models are highly sensitive to their inputs, the quality of a response depends heavily on the quality of the prompt. We can test prompts manually to some extent, but that approach doesn't scale. In this article I will discuss Latitude, a prompt engineering platform that helps you refine prompts, A/B test them, and measure their performance.
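
To make that flow concrete, here is a minimal sketch of the pattern: application inputs are interpolated into a prompt template before being sent to the model. The template text and `call_model` are hypothetical placeholders for whatever prompt and model client your application actually uses.

```python
# Minimal sketch: user/application inputs -> prompt -> model.
# `call_model` is a hypothetical stand-in for your LLM provider's client.

PROMPT_TEMPLATE = (
    "You are a helpful assistant.\n"
    "Answer the user's question using the context below.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}"
)


def build_prompt(context: str, question: str) -> str:
    """Convert raw application inputs into the final prompt string."""
    return PROMPT_TEMPLATE.format(context=context, question=question)


def call_model(prompt: str) -> str:
    """Hypothetical model call; replace with your provider's API."""
    raise NotImplementedError("Wire this up to your model client")


if __name__ == "__main__":
    prompt = build_prompt(
        context="Latitude is a prompt engineering platform.",
        question="What does Latitude do?",
    )
    print(prompt)  # the exact text the model will see
    # response = call_model(prompt)
```

Even in a toy example like this, it's clear that any change to the template changes every response downstream, which is exactly why systematic prompt testing matters.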