Long-Term Memory Prompting Guide

Learn top tricks and prompt suggestions to get the most out of Pieces Long-Term Memory and the Workstream Activities view.


Why This Matters

Over the past few years, LLM users have been figuring out how to prompt different LLMs to get the responses they want.

They've learned tricks like prompt chaining, or the simple fix of adding 'Let's think step by step' to trigger zero-shot chain-of-thought prompting.

With Pieces Long-Term Memory, we’re adding more dimensions to your prompts.

You can query across the captured memories using keywords that connect your prompts to the activities you were doing. You can also prompt based on the application you were using or on a time period, mixing and matching these dimensions as needed.
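
For example, a single prompt can combine an application, a time period, and activity keywords (the applications and phrasings below are illustrative, not the only ones that work):

- "Summarize what I was working on in VS Code yesterday afternoon."
- "What was the error message I saw in Chrome this morning?"
- "Gather the API endpoints I discussed in Slack last week into a list."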

These guides introduce some of the ways you can query the database of stored LTM context using the Pieces Copilot in the Pieces Desktop App, or in any of the applications you use that have a Pieces plugin or extension.


When using these prompts, ensure LTM-2 is turned on in two places: the Long-Term Memory engine itself, and the LTM context source in the Copilot chat.


Click one of the cards below to jump to that guide.

General Long-Term Memory Prompting Tips

Some general prompting tips to help you get the most out of Pieces.
