Why This Matters
Over the past few years, LLM users have been figuring out how to prompt different LLMs to get the responses they want.
They've learned tricks like prompt chaining, or the simple fix of adding "Let's think step by step" to trigger zero-shot chain-of-thought prompting.
With Pieces Long-Term Memory, we’re adding more dimensions to your prompts.
You can query across your captured memories using keywords that connect your prompts to the activities you were doing, prompt based on the application you were using or a time period, and mix and match these as needed.
These guides introduce some of the ways you can query the database of stored LTM context using the Pieces Copilot in the Pieces Desktop App, or in any of the applications you use that have a Pieces plugin or extension.
When using these prompts, ensure LTM-2 is turned on, both as the LTM engine itself and as the LTM context source in your copilot chat.
Guide Links
Click one of the cards below to jump to that guide.
Examples of typical use cases for Pieces LTM, along with the kinds of prompts users ask.
A selection of popular use cases for the new Pieces Workstream Activity view.
Some general prompting tips to help you get the most out of Pieces.