Supported LLMs
We constantly update and configure our plugins and extensions, like the Pieces for JetBrains Plugin, to work with the latest LLMs.
Currently, the Pieces for JetBrains Plugin supports the following models, or you can use your own API key:
| GPT-3.5-Turbo | GPT-4 | GPT-4-Turbo |
| --- | --- | --- |
| GPT-4o | GPT-4o-mini | Gemini Pro Chat |
| Gemini 1.5 Flash | Gemini 1.5 Pro | Claude 3 Haiku |
| Claude 3 Opus | Claude 3 Sonnet | Claude 3.5 Sonnet |
| Code Chat Bison | Chat Bison | Mistral 7B |
| Phi-2 | Phi-3 Mini 128k | Phi-3 Mini 4k |
| Llama 2 7B | Llama 3 8B | Gemma 1.1 7B |
| Gemma 1.1 2B | Code Gemma 1.1 7B | Granite 3B & 8B |
How to Switch Models
To get started, open the Pieces Copilot chat in the side window using any of the available methods, such as choosing the Copilot Chat option in the tool menu, using a quick action, or selecting the Pieces icon from the sidebar.
To access the LLM menu within the Copilot Chat:
- Open the Copilot Chat view by clicking the sidebar icon
- Look for the active model in the lower-left of the Copilot Chat view
- Click the active model icon and select your preferred LLM from the menu
The Pieces Copilot will use that model for all AI-related features, with no restart or refresh needed.
Depending on your preferences and intended workflow, you can choose between cloud-hosted and local models.
Using local models gives you the flexibility to work in a completely offline environment without sacrificing Pieces Copilot functionality.