Introduction to Pieces Copilot

Pieces Copilot is your primary interface for interacting with Pieces’ generative AI. It allows you to ask technical questions, generate code snippets, request debugging assistance, and receive insights—all within a familiar chat environment.


Overview

Pieces Copilot enables you to interact with advanced generative AI (both cloud-hosted and on-device), engage in technical chats, generate and debug code, and access your workflow context effortlessly.

It’s designed for developers, designers, knowledge workers, and enterprise teams alike.

On this page, you’ll find a brief overview of the key capabilities of Pieces Copilot.

For more in-depth guidance, please explore our dedicated subpages:

  • Context & Project Integration: Discover how to enrich your chats by providing context from your local files, folders, saved materials, and websites.
  • Configuring Pieces Copilot: Understand how to manage your LLM runtime, choose between cloud and on-device models, and customize settings to fit your workflow.

Pieces Copilot | Main View

When you launch the Pieces for Developers Desktop App, you are greeted by the Pieces Copilot View—a dynamic, context-rich chat interface designed to help you interact with powerful generative AI, manage your project context, and streamline your coding workflow.

Inside the Pieces Copilot view, you can:

  1. Interact with advanced local and cloud-hosted LLMs for multi-purpose generative AI needs.

  2. Leverage Long-Term Memory (LTM-2) context captured by PiecesOS to enhance AI responses.

  3. Attach and manage folders, files, saved materials, and websites as chat context.

  4. Search and add saved code snippets as additional context to Pieces Copilot chats.

Interacting with Pieces Copilot

We’ll walk you through the main Copilot Chat window and touch on the key elements you can interact with there.

You’ll learn how to interact with Pieces Copilot, use Suggested Prompts when starting new chats, enable or disable Long-Term Memory context, add individual items to the chat, and discover other productivity-focused Quick Actions.

Context & Project Integration

One of Pieces Copilot’s key advantages is context awareness.

By integrating local folders, files, and your saved code snippets, you can significantly boost the relevance and accuracy of Pieces Copilot’s responses.

In this section, we'll cover context and project management—adding items from Pieces Drive or your device as chat context, offering real-world examples to reduce context switching, and showing how to adjust the AI's understanding of your environment or workflow.

Configuring Pieces Copilot

Pieces Copilot offers flexibility in choosing the AI model (cloud-based or local) and customizing the chat appearance and default context usage.

Discover the 40+ cloud-hosted and local models served through Ollama and other providers, learn how to adjust your LLM runtime, customize the appearance of your Pieces Copilot Chat view, enable or disable LTM context for new chats, and more.

Pieces Copilot in Multiple Environments

If you’d rather not use the Pieces Desktop App and its wide range of tools for boosting productivity and reducing context switching in your workflow, that’s fine.

You can still find Pieces for Developers plugins & extensions for your favorite collaboration tools, text editors, and, most importantly, IDEs.

We’ll go over cross-platform consistency (context, history, and usage synchronized through PiecesOS), cross-threaded use cases, links to other Pieces software, and more.


Pieces Copilot in JetBrains IDEs


Get Started with Pieces

Click one of the links below to go to the download and installation Quickstart page for your operating system:
