LM Studio integration


Documentation: Using Local AI Models (LLMs) with LM Studio

You can use Translator++ with AI models that run locally on your own machine, known as large language models (LLMs). This allows for private, offline, and potentially cost-free machine translation.

This guide explains how to use LM Studio to run any compatible LLM on your computer and connect it to Translator++ through our OpenAI Chat add-on.

Overview

LM Studio is a free application that allows you to download and run various open-source LLMs on your personal computer. It features a built-in server that mimics the OpenAI API, which means you can use the flexible “OpenAI Chat” add-on in Translator++ to send requests to the model running on your own machine.

This method works for many popular models, including Sugoi-14B, Llama, DeepSeek, Gemma, and any other model available in the GGUF format.


Video Tutorial

This video provides a complete walkthrough of the process using the Sugoi LLM as an example.


Step-by-Step Guide

Prerequisites

  1. Translator++: You need to have Translator++ installed.
  2. LM Studio: Download and install it from https://lmstudio.ai/.
  3. An LLM Model File: You can either download a model from within LM Studio or have a pre-downloaded model file (in .gguf format).

Step 1: Set up the Model in LM Studio

  1. Download your model. You can find many models in the Search tab within LM Studio. If you have a model file already (like for the Sugoi LLM), you can skip to the next point.
  2. Place the model file in the right folder.
    • Go to the “My Models” tab (folder icon) in LM Studio.
    • Click the “Open in File Explorer” (or Finder on Mac) option next to the folder path.
    • Move or copy your .gguf model file into this folder. For better organization, you can create a sub-folder for each model, as shown in the video.
  3. Verify the model is detected. Once you place the file, LM Studio should automatically detect it and list it in the “My Models” tab.

Step 2: Start the Local Server

  1. Go to the Local Server tab in LM Studio (icon looks like <->).
  2. At the top, select the model you want to use from the dropdown list.
  3. The model will begin loading into your computer’s memory (RAM/VRAM).
  4. Once it’s loaded, click the Start Server button.
  5. The server will start, and you will see a “Reachable at:” address listed, which is usually http://127.0.0.1:1234. Copy this URL. You will also see a log of server activity below.
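Once the server is running, you can confirm it is reachable before configuring Translator++. The sketch below (using only Python's standard library) queries LM Studio's OpenAI-compatible `/v1/models` endpoint; it assumes the default address `http://127.0.0.1:1234`, so adjust the base URL if LM Studio shows a different one.

```python
# Minimal sketch: check that a local LM Studio server is reachable.
# Assumes the default "Reachable at:" address from Step 2.
import json
import urllib.request
import urllib.error

BASE_URL = "http://127.0.0.1:1234"

def models_endpoint(base_url: str) -> str:
    """Build the OpenAI-compatible model-listing endpoint."""
    return base_url.rstrip("/") + "/v1/models"

def list_loaded_models(base_url: str = BASE_URL):
    """Return the model IDs the server reports, or None if unreachable."""
    try:
        with urllib.request.urlopen(models_endpoint(base_url), timeout=5) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        return None  # server not started, or wrong address/port
```

If `list_loaded_models()` returns `None`, double-check that the server is started and that the address matches what LM Studio displays.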

Step 3: Configure the OpenAI Chat Add-on in Translator++

  1. Open Translator++ and go to Options > OpenAI Chat.
  2. Click the plus (+) icon to create a new preset. Name it something descriptive, like “LM Studio – Sugoi LLM”.
  3. In the OpenAI API settings, configure the following:
    • Target URL: Paste the URL you copied from LM Studio (e.g., http://127.0.0.1:1234/v1/chat/completions). Make sure it ends with /v1/chat/completions.
    • Your OpenAI API Key: You can leave this field empty.
    • Model: You can also leave this blank, as the model is selected in LM Studio.
  4. Important: Scroll down to the advanced settings. Local models can be easily overwhelmed by too many simultaneous requests. We recommend these settings for stability:
    • Max concurrent requests: Set this to 1.
    • Batch delay: Set this to 2000 (milliseconds) or higher to give your computer time between requests.
  5. Click Quick Save.
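To see why the API key and model fields can stay empty, here is a sketch of the kind of request an OpenAI-compatible client sends to the local server. The prompt wording and the `translate_line` helper are illustrative assumptions, not part of Translator++ or LM Studio; the key point is that no `Authorization` header is required and the `"model"` field can be omitted, because LM Studio answers with whichever model you loaded in Step 2.

```python
# Sketch of an OpenAI-style chat-completions request to a local LM Studio
# server. Prompt text and function names are illustrative assumptions.
import json
import urllib.request

ENDPOINT = "http://127.0.0.1:1234/v1/chat/completions"  # URL from Step 2

def build_payload(text: str) -> dict:
    """Build a chat-completions payload. "model" is omitted: LM Studio
    uses whichever model is currently loaded in its server tab."""
    return {
        "messages": [
            {"role": "system", "content": "Translate the user's text into English."},
            {"role": "user", "content": text},
        ],
        "temperature": 0.3,
    }

def translate_line(text: str) -> str:
    """Send one line to the local server and return the model's reply."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(build_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},  # no API key needed locally
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```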

Step 4: Translate!

You are now ready to translate.

  1. Select the text you want to translate in your project.
  2. Right-click and choose Batch translation > Send to….
  3. In the “Select Translator” dropdown, choose OpenAI Chat.
  4. A second dropdown will appear. Select the preset you just created (e.g., “LM Studio – Sugoi LLM”).
  5. Click Translate Now.

Translator++ will send the text to your local LM Studio server, which will process it using your chosen AI model and return the translation. You can monitor the progress in the LM Studio server logs. The speed will depend heavily on the power of your GPU.
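The stability settings from Step 3 amount to sending requests one at a time with a pause in between. A rough sketch of that throttling pattern, assuming a `translate_fn` like the per-line request above:

```python
# Sketch of sequential, delayed requests, mirroring the recommended
# "Max concurrent requests: 1" and "Batch delay: 2000 ms" settings.
import time

def translate_batch(lines, translate_fn, delay_ms=2000):
    """Translate lines one at a time, pausing between requests so the
    local model is never handling more than one request at once."""
    results = []
    for line in lines:
        results.append(translate_fn(line))  # one request in flight at a time
        time.sleep(delay_ms / 1000)         # give the machine time to recover
    return results
```

Raising the delay (or keeping concurrency at 1) trades speed for stability, which is usually the right trade-off on consumer hardware.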
