Switching LLMs

The Pieces for Sublime Text Plugin currently supports 16 different LLMs, including both cloud-based and local models.


Available LLMs

We constantly update and configure our plugins and extensions, like the Sublime Text Plugin, to work with the latest LLMs.

See the list of available models below.

Cloud LLMs

The cloud-based models integrate effortlessly with the Pieces Copilot, offering high performance and real-time responses to your queries.


GPT-4o

GPT-4

GPT-4 Preview

GPT-3.5-turbo

GPT-3.5-turbo-16k

Codey (PaLM 2)

Mixtral GPU

Phi-2 GPU

Gemini


On-Device LLMs

We also support on-device LLMs for developers prioritizing privacy, security, or offline functionality.


CodeLlama GPU

Llama2 GPU

Llama2

Phi-2 CPU

NeuralHermes-2.5-Mistral-7B CPU

NeuralHermes-2.5-Mistral-7B GPU


How To Configure Your LLM Runtime

Switching the LLM model in the Pieces for Sublime Text Plugin is straightforward, allowing you to choose the model that best fits your needs.

To get started, use the hotkey ⌘+shift+p (macOS) or ctrl+shift+p (Windows/Linux) and enter Pieces: Open Pieces Settings.

This will open Pieces.sublime-settings, where you can change the model used for AI functionality and adjust Pieces settings within the Pieces for Sublime Text Plugin.

Modifying the Settings JSON Object

Copy and paste the entire JSON object from the left window into the right window.

Once you're in the right window, you can edit "model": "GPT-3.5-turbo Chat Model" to reflect any of the supported LLMs, which appear commented out in the list below it.
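As a rough sketch of what this edit looks like (the exact default contents of your Pieces.sublime-settings file, and the precise model name strings, may differ from this example), switching models is a one-line change to the "model" key. Sublime settings files accept JSON with comments, so alternative model names can be kept commented out for quick switching:

```json
{
    // Model used for Pieces Copilot AI features.
    // To switch, replace the value below with another supported
    // model name, for example (names are illustrative):
    // "model": "GPT-4 Chat Model",
    // "model": "Llama2 CPU Chat Model",
    "model": "GPT-3.5-turbo Chat Model"
}
```

Save the file in the right window and the plugin will use the newly selected model for subsequent Copilot queries.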

Check out our configuration page for details on other adjustable settings.
