Generative AI Conversations

Enhance your development workflow with Copilot Chats inside the Pieces for JetBrains Plugin—an interactive AI assistant integrated directly into your JetBrains IDE.


Accessing the Pieces Copilot Chat in JetBrains IDEs

There are several ways to open up the Pieces Copilot chat window in any of your JetBrains IDEs.

via Right-Click Context Menu

  1. Select some code in your active file

  2. Right-click on the highlighted code

  3. Select Ask Pieces Copilot About... from the tool menu to open the chat with the selected context

via the Sidebar

  1. Locate and click on the Pieces for Developers icon in your sidebar tray

  2. Select the Copilot Chat tab within the Pieces tool window to open the chat interface

via Keyboard Shortcuts

  1. Press ctrl+shift+a (Windows/Linux) or ⌘+shift+a (macOS) to open the Search Everywhere window

  2. Type "Pieces Copilot" and select it from the search results to open the chat

Adding Conversation Context

You can add different types of materials, like entire folders, specific snippets, and websites as context to your Pieces Copilot conversation.

This greatly improves the quality and relevance of the generative AI responses you receive, since the Copilot can give hyper-specific answers when it's contextually aware of your code.

To add context to conversations, start by clicking the folder icon in the bottom-left corner of the Pieces Copilot side view—then, add folders, snippets, or files as needed.

You can also right-click a file from your project or active file tree and add that file as context to the conversation.

This can be done without even opening the Pieces Copilot window. Simply right-click on a file in your open project and select Add to Conversation Context.

Adding Code Snippets as Context

You can paste snippets of code as a code block inside of any Copilot Chat by clicking the { } icon inside the chat window, then pasting in your code.

This is useful for bringing in code that isn’t present immediately in the active file as context, or for comparisons and suggestions.

Extracting Code from Screenshots

You can also extract code from screenshots directly from the Copilot chat menu by selecting Extract Code from Screenshot, selecting the desired screenshot from your Finder (macOS) or File Explorer (Windows/Linux) menu, and confirming.

Pieces Copilot will then scan the screenshot and generate the code captured in the image directly into the chat, from which you can copy it, insert it at your cursor, save it as a snippet, and more.

AI Quick Actions

Above functions in your code, you can find clickable Pieces: Comment and Pieces: Explain buttons.

Click Pieces: Explain to open the Pieces Copilot in the side window. The Pieces Copilot will automatically explain the purpose and function of that code within the chat.

Similarly, you can click Pieces: Comment above a function to have your preferred LLM generate documentation for that piece of code—you can then insert that code directly at the cursor by clicking Insert at Cursor or save it as a snippet using the built-in Save to Pieces button.

Using the Pieces Copilot Chat to Boost Productivity

The Pieces Copilot Chat is designed to assist you with various coding tasks to boost productivity and enhance your workflow. This is done primarily by eliminating context-switching (needing to leave your IDE to access generative AI).

Asking Coding Questions

Developers often encounter questions about efficient implementations or obscure syntax when working with advanced data structures or algorithms.

The Pieces Copilot can help you understand some of these complexities.

Let’s take a look at a few examples of how the Pieces for JetBrains Plugin can break through some typical blockers:

Optimizing Code & Gaining Insights

For example, if you’re implementing a complex caching mechanism using memoization in Python and want to figure out a better way to optimize it, you could ask this specific question based on an existing function you’ve just written:

How can I further optimize this caching function, and are there any memory implications of using @lru_cache?

from functools import lru_cache

@lru_cache(maxsize=1024)
def compute_heavy_task(x, y):
    # Complex nested operations and recursive calls
    result = (x ** y + y ** x) / (x * y)
    for i in range(1, 10000):
        result += (x + i) / (y + i)
    return result

Here, the Pieces Copilot would analyze your selected function, offer insights into the limits of lru_cache, and potentially suggest alternative caching strategies if it determines that large cache sizes could impact memory.

It could also suggest an additional function or edits to the existing function to handle bottlenecking—and so on.
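One concrete detail the Copilot might surface is that every function decorated with @lru_cache exposes cache_info() and cache_clear(), which let you measure hit rates and reclaim memory yourself. A minimal sketch (the function body here is a simplified stand-in, not the example above):

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def compute(x, y):
    # Stand-in for an expensive computation
    return x ** y

compute(2, 10)   # first call: cache miss
compute(2, 10)   # repeated call: served from the cache

info = compute.cache_info()
print(info.hits, info.misses, info.currsize)  # 1 1 1

compute.cache_clear()  # drop cached results to release memory
```

Inspecting cache_info() during development is an easy way to confirm whether a large maxsize is actually earning its memory footprint.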

Improving Security & Error Handling

Maintaining well-secured code is critical in high-security applications or scripts that contain sensitive data and process transactions. The Pieces Copilot can help you improve your code by analyzing your code’s reliability and security.

Take this transactional script, for example:

// transactionService.js
async function processTransaction(accountId, amount, token) {
    if (!authenticateToken(token)) {
        throw new Error('Invalid authentication token');
    }

    let balance = await getAccountBalance(accountId);
    if (balance < amount) {
        throw new Error('Insufficient funds');
    }

    try {
        balance -= amount;
        await updateBalance(accountId, balance);
        return { success: true, message: 'Transaction completed' };
    } catch (error) {
        return { success: false, message: 'Transaction failed', error: error.message };
    }
}

function authenticateToken(token) {
    return token === process.env.SECURE_TOKEN;
}

This JavaScript code is seriously lacking in the security and efficiency departments and would be considered high-risk code.

Try passing this through the Pieces Copilot with the following question:

How can I improve the security and reliability of this transaction function? Are there any patterns I should follow for error handling and rollback?

In response, the Pieces Copilot might suggest a stronger validation method, like HMAC (Hash-Based Message Authentication Code), to ensure tokens are securely validated and reduce the risk of unauthorized access.

Also, if a database error occurs during updateBalance, the function does not roll back the transaction, which might leave the account balance in an inconsistent state. Adding a rollback mechanism or using a transactional approach—like a database transaction session—ensures that if updateBalance fails, no changes are saved, and the balance remains accurate.
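To make those two suggestions concrete, here is a minimal sketch in Python (hypothetical names; the original example is JavaScript, and a real service would persist balances in a database rather than a dict). It shows constant-time token comparison via hmac.compare_digest and a manual rollback when the update step fails:

```python
import hmac
import os

# Assumed secret for illustration; falls back to a demo value if unset
SECRET = os.environ.get("SECURE_TOKEN", "demo-secret")

def authenticate_token(token: str) -> bool:
    # compare_digest runs in constant time, unlike a plain
    # `token == SECRET` check, which can leak timing information
    return hmac.compare_digest(token, SECRET)

def process_transaction(balances: dict, account_id: str, amount: float, token: str):
    if not authenticate_token(token):
        raise PermissionError("Invalid authentication token")
    if balances[account_id] < amount:
        raise ValueError("Insufficient funds")

    previous = balances[account_id]
    try:
        balances[account_id] -= amount
        # ... persist the new balance to storage here ...
    except Exception:
        # Roll back the in-memory change so the balance stays consistent
        balances[account_id] = previous
        raise
    return {"success": True}
```

In a production system the try/except rollback would be replaced by a real database transaction, but the shape of the fix is the same: either the whole update commits, or none of it does.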

This is an extremely valuable tool to have at your disposal while you’re coding, especially since you don’t need to leave your IDE and can use whatever LLM you prefer (whether local or cloud-based).
