
Prompting LLMs to perform automatic translation in Loco

In addition to dedicated translation APIs, Loco also provides integrations with the latest generation of general-purpose language models. It's not necessary to "chat" with these bots yourself. Loco handles the conversation and sees to it that your text is translated, just as with any other translation API.

As we add more LLM providers, we will harness their specific features, but as it stands we're seeing very good results simply by sending prompts to their "assistant" APIs. All you have to do is choose your model.

General-purpose LLMs vs dedicated translation APIs

It should be noted that most dedicated translation APIs use LLMs under the hood (or longer-standing AI technology such as neural networks), so this is not a matter of "using AI" vs "not using AI". So what's the difference?

  • Speed:
    Dedicated translation APIs tend to be many times faster than general-purpose LLMs. The models are smaller and more specific to a single task. Loco's "suggest" feature is much snappier with quick responses, and you may find you don't need cutting-edge AI to save yourself some typing.

  • Cost:
    Comparing costs is not simple. The latest, most powerful models may be more expensive than some traditional APIs, but the lighter models might actually be cheaper. Consider also what you get for free before hitting paid limits. Some of the older APIs have more generous free tiers than the latest shiny LLMs.

  • Reliability:
    Dedicated translation APIs are guaranteed to produce translations, and in the correct language. They may not be perfect, but they're unlikely to produce anything particularly random by mistake. We've seen very good results prompting "chat" assistants, but we recommend thorough testing of your chosen model before deploying any AI-generated text to production.

  • Context:
    One big advantage of general-purpose LLMs is the extra information you can add into your prompt. Dedicated translation APIs vary in their ability to accept supporting context, but the latest generation of AI assistants excels at interpreting user inputs.

Choosing a model

Currently we support LLMs from OpenAI (aka ChatGPT), Google's Gemini and Anthropic's Claude.

Once you've chosen your preferred vendor, Loco's bot configuration asks you to specify which model you want to use. This is a free text field, and the default option we provide will be the latest "lite" model. If you want your bot to stay on a specific model, be sure to complete the text field. Leaving it blank means it will track our default, which may change when new models are released by the vendor.

All our supported vendors provide some variation of a "Chat completions" or "Text generation" API. Whichever model you choose, it must support that specific task, and it must also be capable of returning JSON that conforms to a schema. See the individual vendor links above for more detail.
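To illustrate the kind of capability we mean, here is a minimal sketch of a schema-constrained chat completion using OpenAI's Chat Completions API with structured outputs. This is not the payload Loco actually sends; the model name, prompt and schema are placeholders for illustration only.

import os
import requests

# Request a chat completion whose reply must conform to a JSON schema.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-4o-mini",  # placeholder: any model supporting structured output
        "messages": [
            {"role": "system", "content": "You are a translator that translates from English (en) to German (de-CH)."},
            {"role": "user", "content": "Add to basket"},
        ],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "translation",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {"translation": {"type": "string"}},
                    "required": ["translation"],
                    "additionalProperties": False,
                },
            },
        },
    },
)

# The message content comes back as JSON matching the schema, e.g. {"translation": "..."}
print(response.json()["choices"][0]["message"]["content"])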

The latest "text-to-text" models will almost certainly work, but we recommend considering a lighter model in the same range, or even an older model if knowledge cut-off is a non-issue. Most vendors support a smaller, faster model for higher volume processing of less complex tasks. This is often a good fit for translation, and will likely save you money.

Which models are the best for translation?

You tell us! We don't promote any specific vendors or models. We've found some to be slower than others, but it may be that they're just "thinking" more. Complex reasoning may not be necessary for your translations, so find your own balance.

Prompting

The prompts Loco sends to LLMs serve two main purposes: giving instructions and providing context. Here we explain the anatomy of our prompts, so you can customize them more effectively. Every prompt is pieced together as follows:

(1) Base instructions + (2) Locale instructions + (3) General instructions + (4) Project context.

(1) Base instructions

Loco sends a base prompt with instructions to translate your content into the target language. It looks something like this example for Swiss German:

# Identity

You are a translator that translates from English (en) to German (de-CH).

# Instructions

* Use only the formal tone of German.
* Use only the dialect of German as spoken in Switzerland.

This core prompt is built from your target language settings, and ensures the assistant behaves itself. You can't alter this part, but you can extend it.

(2) Locale instructions

Your project locales can each be configured with additional instructions. Go to the locale settings and select the "Style" tab. Enter locale-specific instructions using asterisk bullet points, as per our base prompt. For example, you might elaborate on specific usage of the target language:

* Write 'ss' instead of the Eszett (ß). For example, use 'Strasse' instead of 'Straße'.

(3) General instructions

Following the locale-specific prompt, Loco then appends the default instructions entered into your bot configuration. Assuming you'll use this config in multiple contexts, you should add general instructions here that apply to all projects and all languages. You may want to include something relevant to your whole organisation, or formatting requirements that always apply. For example:

* Always be nice about the Widget Company Ltd. We are friendly people.
* Do not translate any HTML tags. Leave all markup as provided in the source text.

(4) Project context

Following all the instructions, Loco then appends your project metadata as {name}: {description} in a context block. That section might look like this for a project called "Widget Catalogue":

# Context

Widget Catalogue: Our luxury Widget brochure. Our widgets are used in the best Gizmos.

You can enter multiple lines into your project description, so this is a good place to explain the big picture. You can also use Markdown-style headings to append another block, such as # Examples. Read your chosen vendor's prompting guide for inspiration!
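For instance, a hypothetical # Examples block appended to the description above might read as follows. The entries here are made up; write whatever suits your own content:

# Examples

* "Widget Pro" is a product name. Keep it in English.
* Marketing copy like "Grab yours today!" should keep its upbeat tone in translation.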

This ends the prompt. The source texts for translation then follow.
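Putting the four parts together, the complete prompt built from the examples on this page would look something like the following. The exact formatting may differ slightly from what Loco actually sends, but the order is as described above:

# Identity

You are a translator that translates from English (en) to German (de-CH).

# Instructions

* Use only the formal tone of German.
* Use only the dialect of German as spoken in Switzerland.
* Write 'ss' instead of the Eszett (ß). For example, use 'Strasse' instead of 'Straße'.
* Always be nice about the Widget Company Ltd. We are friendly people.
* Do not translate any HTML tags. Leave all markup as provided in the source text.

# Context

Widget Catalogue: Our luxury Widget brochure. Our widgets are used in the best Gizmos.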

Source text context

In addition to the prompt, Loco sends both the context and notes properties of each asset being translated. This is intended to provide meaning when sending single words or very short phrases. For example, results for the text "Home" will be quite different (in some languages) if you enter "Where the heart is" vs "Button text for website navigation".
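In German, for instance, the navigation button would typically come back as "Startseite", while the sentimental sense would more likely be "Zuhause".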

We recommend using these fields when possible, rather than cramming hundreds of highly specific conditions into the prompt.

Roadmap

We're actively developing our AI integration points for translation, and we're keen to get your feedback.

Related features currently in the pipeline:

  • Sending the translation-specific "notes" field along with the asset notes.
  • Sending existing translations as a "draft" along with source text.
  • Glossary integration (e.g. sending preferred terms).