Cross-Platform Issues
Learn what troubleshooting steps to take if PiecesOS or the Pieces Desktop App isn’t working as expected, regardless of your operating system.
Versions & Updates
Many issues can stem from out-of-date plugins, extensions, the Pieces Desktop App, or PiecesOS itself.
Updating PiecesOS
Both PiecesOS and the Pieces Desktop Application update automatically if installed through the Pieces Suite Installer.
For standalone installations (i.e., not installed through a macOS or Linux app store), updates are checked daily or on application launch, and you are prompted to install or postpone them.
See your specific OS page for platform-specific instructions on updating PiecesOS:
Updating the Pieces Desktop App
Ensuring the Desktop App is up-to-date is critical.
See your specific OS page for platform-specific instructions on updating the Pieces Desktop App:
Connection Issues with PiecesOS
You may occasionally encounter connection issues with PiecesOS or your Personal Cloud, resulting in:
- Pieces Copilot not generating outputs
- Difficulty finding saved materials
- Trouble sharing code snippets
The quickest way to resolve these basic connection issues is to restart PiecesOS and then check for updates.
Restarting PiecesOS & Checking Updates
To restart PiecesOS and check for updates:

- Restart PiecesOS
- Ensure PiecesOS is running (look for the Pieces icon in your system tray or menu bar, or try the health check sketched after this list)
- Check for and install available updates
- Verify that the Pieces Desktop Application and the plugin or extension you are attempting to use are up-to-date
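If you want to confirm from the command line that PiecesOS is reachable, a minimal sketch follows. It assumes PiecesOS serves its local REST API on the default port (39300 on recent builds, 1000 on older ones; the port can vary by version and platform):

```bash
# A minimal reachability check for PiecesOS. Assumption: PiecesOS
# serves its local REST API on port 39300 (recent builds) or 1000
# (older builds); the port can vary by version and platform.
curl -sf http://localhost:39300/.well-known/health \
  || curl -sf http://localhost:1000/.well-known/health \
  || echo "PiecesOS does not appear to be running"
```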
Common Installation Issues
Common issues can occur when setting up PiecesOS and the Pieces Desktop App for the first time.
Platform-specific solutions are detailed on their respective OS pages:
Using Local Models
Running Pieces software with a local LLM can offer greater privacy, faster responses (when properly configured), and independence from cloud dependencies.
However, local models often require more robust hardware and careful optimization to run smoothly. Older devices, regardless of OS, are often incapable of running hardware-demanding LLMs.
Hardware Recommendations
Local models demand more from your system than their cloud-hosted counterparts.
To ensure a stable, responsive experience, make sure your device meets these general recommendations:

- Modern Hardware: Devices from 2021 or newer typically handle local inference more efficiently.
- RAM & VRAM: For GPU-accelerated models, aim for a dedicated GPU with at least 6GB of VRAM. Models that push the limits of your GPU memory may fail to load or run slowly. (One way to check your VRAM is sketched after this list.)
- CPU vs. GPU: If your machine does not have a sufficiently powerful GPU, consider CPU-tuned models. They tend to be slower but are more forgiving on hardware resources.
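As a quick way to see how much VRAM you have to work with, the sketch below queries an NVIDIA GPU. It assumes the nvidia-smi utility that ships with NVIDIA drivers; AMD and Apple hardware require different tools:

```bash
# Report GPU name, total VRAM, and free VRAM. Assumption: an NVIDIA
# GPU with nvidia-smi installed (bundled with NVIDIA drivers); AMD
# and Apple GPUs need different tooling.
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```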
Choosing the Right Model
Select a model that matches your system’s capabilities and performance limitations, especially if you’re running an older or weaker device.
- Lightweight Models: Opt for smaller models if you’re on older hardware or limited VRAM. These are easier to run and may still produce quality outputs for general use cases. (A rough way to estimate whether a model fits in memory is sketched after this list.)
- GPU-Tuned Models: If you have a strong GPU with enough VRAM, GPU-accelerated models often run faster and produce results more efficiently.
- CPU-Tuned Models: If you lack a dedicated GPU or have low GPU memory, CPU-tuned models are a fallback option. Although slower, they can still provide consistent performance.
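As a back-of-the-envelope check (an illustrative assumption, not official Pieces sizing guidance), a model’s memory footprint is roughly its parameter count times the bytes per parameter at your quantization level, plus overhead for context and activations:

```bash
# Back-of-the-envelope VRAM estimate: parameters x bytes per parameter,
# plus ~20% overhead for context and activations. All numbers here are
# illustrative assumptions, not official Pieces sizing guidance.
PARAMS=7000000000                      # e.g., a 7B-parameter model
BYTES_PER_PARAM_X10=5                  # 4-bit quantization ~= 0.5 bytes (x10 for integer math)
EST_BYTES=$(( PARAMS * BYTES_PER_PARAM_X10 / 10 ))
EST_TOTAL=$(( EST_BYTES * 12 / 10 ))   # add ~20% overhead
echo "Estimated VRAM needed: ~$(( EST_TOTAL / 1024 / 1024 / 1024 )) GiB (integer floor)"
```

By this estimate, a 4-bit 7B model wants roughly 4 GiB of VRAM, which is why the 6GB recommendation above leaves useful headroom.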
Local Model Crashing
If you are running into ‘hanging’ or crashing issues when attempting to power Pieces using a local LLM, it may be because of your system’s hardware.
Insufficient system resources, such as RAM or VRAM, may cause hiccups, slowdowns, and other glitches.
There are a few options available to you for troubleshooting:
- Check Hardware: Verify that you have sufficient RAM, VRAM, and CPU headroom as recommended by the model. (A way to watch GPU memory in real time is sketched after this list.)
- Update Drivers: If you have a Vulkan-based GPU, run `vulkaninfo` (or a similar tool) to check for GPU or Vulkan-related errors, and update your GPU drivers if you detect compatibility issues.
- Model Switching: If you experience crashes or slowdowns, try switching to a less resource-intensive local model. Reducing complexity can stabilize performance.
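To watch whether a model is exhausting GPU memory as it loads, a minimal sketch follows, again assuming an NVIDIA GPU with nvidia-smi available:

```bash
# Poll GPU memory once per second while loading or running a local
# model (NVIDIA only; Ctrl+C to stop). If memory.used approaches
# memory.total, the model is likely too large for this GPU.
nvidia-smi --query-gpu=memory.used,memory.total --format=csv --loop=1
```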
If you’ve tried all of these troubleshooting steps but are still experiencing crashes, hangs, or other instabilities, you may need to switch to a cloud-based LLM.
Vulkan-based GPUs
NVIDIA and AMD GPUs both support the Vulkan API, but there are known issues with using Vulkan GPUs for AI and LLM-centered workloads.
For example, a corrupted or outdated Vulkan runtime or driver can cause crashes.
If you are experiencing this issue, check Vulkan health from your terminal or command line and scan for errors or warning messages; if any issues are detected, update your GPU drivers.
Checking Vulkan
To check your Vulkan health status, run `vulkaninfo` in your terminal or command line and look for errors or warnings.
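For a quick scan, the sketch below filters vulkaninfo output for error or warning lines. It assumes the Vulkan tools are installed and that your version supports the --summary flag (older versions can omit it):

```bash
# Scan vulkaninfo output for errors or warnings. Assumes the Vulkan
# tools are installed; drop --summary on older versions that lack it.
if vulkaninfo --summary 2>&1 | grep -iE "error|warn"; then
  echo "Potential Vulkan issues found; consider updating GPU drivers"
else
  echo "No obvious Vulkan errors detected"
fi
```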
Updating GPU Drivers
If issues are detected, update your GPU drivers to ensure Vulkan compatibility and stability.
Checking Hardware
It may be necessary to verify your system’s specifications if you experience ongoing issues.
See the OS-specific pages for instructions on how to check CPU, RAM, and GPU details:
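In the meantime, a few common terminal commands can give a quick read on CPU, RAM, and GPU. These are standard OS utilities, not Pieces-specific tools, and the exact commands depend on your platform:

```bash
# Quick spec checks using standard OS utilities (not Pieces-specific).

# Linux: CPU model, memory, and GPU
lscpu | grep "Model name"
free -h
lspci | grep -i "vga\|3d"

# macOS: hardware overview (CPU, memory, etc.)
system_profiler SPHardwareDataType

# Windows (PowerShell equivalents, shown here as comments):
# Get-ComputerInfo CsProcessors, CsTotalPhysicalMemory
# Get-CimInstance Win32_VideoController | Select-Object Name, AdapterRAM
```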