Prompting LLMs to perform automatic translation in Loco

In addition to dedicated translation APIs, Loco also provides integrations with the latest generation of general purpose language models. It's not necessary to "chat" with these bots yourself. Loco handles the conversation and sees to it that your text is translated just as it would be by any other translation API.

As we add more LLM providers we will harness their specific features, but as it stands we're seeing very good results simply by sending prompts to their "assistant" APIs. All you have to do is choose your model.

General purpose LLMs vs dedicated translation APIs

It should be noted that most dedicated translation APIs use LLMs under the hood (or longer-standing AI technology such as neural networks), so this is not a matter of "using AI" vs "not using AI". So what's the difference?

  • Speed:
    Dedicated translation APIs tend to be many times faster than general purpose LLMs. Their models are smaller and more specific to a single task. Loco's "suggest" feature is much snappier with these quick responses, and you may find you don't need cutting-edge AI to save yourself some typing.

  • Cost:
    Comparing costs is not simple. The latest, most powerful models may be more expensive than some traditional APIs, but the lighter models might actually be cheaper. Consider also what you get for free before hitting paid limits. Some of the older APIs have more generous free tiers than the latest shiny LLMs.

  • Reliability:
    Dedicated translation APIs are guaranteed to produce translations, and in the correct language. They may not be perfect, but they're unlikely to produce anything particularly random by mistake. We've seen very good results prompting "chat" assistants, but we recommend thorough testing of your chosen model before deploying any AI-generated text to production.

  • Context:
    One big advantage of general purpose LLMs is the extra information you can add into your prompt. Dedicated translation APIs vary in their ability to accept supporting context, but the latest generation of AI assistants excel at interpreting user inputs.

Choosing the right model

Currently we support OpenAI (which you may know as ChatGPT) and Google's Gemini series.

Once you've chosen your preferred vendor, Loco's bot configuration asks you to specify which model you want to use. This is a free text field, and the default option we provide will be the latest "lite" model.

Currently all our supported vendors provide a "Chat completions" or text generation API. Whichever model you choose, it must support that specific task, and also be compatible with "Structured outputs". See the individual vendor links below for more detail.

The latest, most powerful models for text generation will almost certainly work, but we recommend considering a lighter model in the same range, or even an older model if knowledge cut-off is a non-issue. Most vendors support a smaller, faster model for higher volume processing of less complex tasks. This is often a good fit for translation, and will likely save you money.
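To illustrate what "Chat completions" with "Structured outputs" means in practice, here is a minimal sketch of the kind of request body such an API accepts. The model name, prompt wording, and JSON schema are illustrative assumptions, not Loco's actual internals.

```python
import json

def build_translation_request(text, target_lang, model="gpt-4o-mini"):
    """Build a Chat Completions-style request body that asks for a
    translation constrained to a JSON schema ("Structured outputs").
    All field values here are illustrative."""
    return {
        "model": model,
        "messages": [
            # A system message carries the translation instruction
            {"role": "system",
             "content": f"Translate the user's text into {target_lang}. "
                        "Reply only with the translation."},
            {"role": "user", "content": text},
        ],
        # "Structured outputs": force the reply into a fixed JSON shape
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "translation",
                "schema": {
                    "type": "object",
                    "properties": {"translation": {"type": "string"}},
                    "required": ["translation"],
                },
            },
        },
    }

request = build_translation_request("Hello", "French")
```

A model that doesn't support this structured response mode would reject or ignore the `response_format` field, which is why the chosen model must be compatible with it.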

Providing context

Loco sends a basic prompt with instructions to translate your content into the desired language, but you can add more context to this if needed.

The custom prompt entered into your bot configuration will be appended to our basic prompt. You might want to add something general about your project, or the style of text you want, but there's no need to ask it to translate anything; we've already done that.

Loco also prompts for formal or informal language styles according to each project locale's formality setting.

Avoid entering language-specific prompts, unless you plan to use the bot for only that language.
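The prompt assembly described above can be sketched roughly as follows. The exact wording of Loco's base prompt is not public, so the strings and the formality handling here are assumptions for illustration only.

```python
def assemble_prompt(target_lang, custom_prompt="", formality=None):
    """Sketch of a base translation instruction with an optional
    formality hint and a user-supplied custom prompt appended last.
    The wording is illustrative, not Loco's actual prompt."""
    parts = [f"Translate the following text into {target_lang}."]
    if formality == "formal":
        parts.append("Use a formal register.")
    elif formality == "informal":
        parts.append("Use an informal register.")
    if custom_prompt:
        # Project-level context is appended; it should not repeat
        # the translation instruction itself.
        parts.append(custom_prompt)
    return " ".join(parts)
```

Note how the custom prompt only adds context; the instruction to translate is already present, which is why there's no need to repeat it in your bot configuration.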

Source text context

In addition to the prompt, Loco sends both the context and notes properties of each asset being translated. This is intended to provide meaning when sending single words or very short phrases.

For example, results for the text "Home" will be quite different (in some languages) if you enter "Where the heart is" vs "Button text for website navigation".
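One way to picture how context and notes accompany the source text is as extra labelled lines in the message sent to the model. The field labels below are illustrative assumptions; Loco's actual wire format may differ.

```python
def user_message(text, context=None, notes=None):
    """Combine source text with optional context and notes, giving the
    model supporting information for short or ambiguous strings.
    The labels are illustrative, not Loco's actual format."""
    lines = [f"Text: {text}"]
    if context:
        lines.append(f"Context: {context}")
    if notes:
        lines.append(f"Notes: {notes}")
    return "\n".join(lines)

msg = user_message("Home", context="Button text for website navigation")
```

With that context attached, a model is far more likely to render "Home" as a navigation label than as a dwelling.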

Formatted strings

The AI bots listed here don't use the same placeholder protection that we provide for dedicated translation APIs.

The models seem to understand most formatting syntax automatically, or at least know to leave it alone. If you run into problems, try adding something into your prompt to explain how the text is formatted.
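Because there is no placeholder protection here, a simple sanity check of your own can catch cases where a model mangles formatting. This is a minimal sketch covering printf-style (`%s`, `%1$s`) and brace (`{name}`) placeholders; real formats vary, so treat the pattern as an assumption to adapt.

```python
import re

# Matches %s / %d, positional %1$s, and {name}-style placeholders.
# Illustrative only — extend the pattern for your own string formats.
PLACEHOLDER = re.compile(r"%(?:\d+\$)?[sd]|\{\w+\}")

def placeholders_match(source, translation):
    """Return True if every placeholder in the source string also
    appears in the translation (order-insensitive)."""
    return sorted(PLACEHOLDER.findall(source)) == \
           sorted(PLACEHOLDER.findall(translation))
```

Running a check like this over AI-translated strings before deployment is a cheap way to spot the occasional dropped or rewritten placeholder.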
