# Renaming live context to long term memory #591 (Merged)

/workstream_pattern_engine/processors/vision/activate [POST]

This will activate your Long-Term Memory Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April).
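Ahead of the full example below, here is a minimal sketch of calling the activation endpoint from Python. The base URL and the empty POST body are assumptions (PiecesOS serves its REST API on the local machine); only the endpoint path comes from this document:

```python
import json
import urllib.request

# Assumed local base URL for PiecesOS; adjust to your installation.
BASE_URL = "http://localhost:1000"
ACTIVATE_PATH = "/workstream_pattern_engine/processors/vision/activate"

def endpoint(base_url: str, path: str) -> str:
    """Join the base URL and an endpoint path (pure helper)."""
    return base_url.rstrip("/") + path

def activate_ltme(base_url: str = BASE_URL) -> dict:
    """POST with an empty body; the endpoint responds with the
    engine status (WorkstreamPatternEngineStatus) as JSON."""
    req = urllib.request.Request(
        endpoint(base_url, ACTIVATE_PATH), data=b"", method="POST"
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

No authorization header is attached, matching the "No authorization required" note in these docs.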

### Example {#workstream_pattern_engine_processors_vision_activate-example}


/workstream_pattern_engine/processors/vision/data/clear [POST]

This will clear the data for the Long-Term Memory Engine, specifically our vision data. This endpoint accepts ranges of time for which the user wants to remove the processed data.
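Since this endpoint takes time ranges, a sketch of building and sending such a request may help. The request-body shape (a list of `from`/`to` Unix-timestamp ranges) is an assumption for illustration, not the documented schema:

```python
import datetime as dt
import json
import urllib.request

CLEAR_PATH = "/workstream_pattern_engine/processors/vision/data/clear"

def time_range_payload(start: dt.datetime, end: dt.datetime) -> dict:
    """Hypothetical body shape: one from/to range as Unix timestamps."""
    return {"ranges": [{"from": int(start.timestamp()),
                        "to": int(end.timestamp())}]}

def clear_vision_data(base_url: str, start: dt.datetime, end: dt.datetime) -> None:
    """POST the ranges to clear; the endpoint returns an empty body."""
    body = json.dumps(time_range_payload(start, end)).encode()
    req = urllib.request.Request(
        base_url.rstrip("/") + CLEAR_PATH,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req).close()
```

Using timezone-aware datetimes keeps the timestamps unambiguous across machines.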

### Example {#workstream_pattern_engine_processors_vision_data_clear-example}


/workstream_pattern_engine/processors/vision/deactivate [POST]

This will deactivate your Long-Term Memory Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April).

### Example {#workstream_pattern_engine_processors_vision_deactivate-example}


/workstream_pattern_engine/processors/vision/status [GET]

This will get a snapshot of the status of your Long-Term Memory Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April).
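A minimal sketch of polling this GET endpoint and interpreting the snapshot. The base URL and the `vision.activation` key used by the helper are assumptions; the real `WorkstreamPatternEngineStatus` fields may differ:

```python
import json
import urllib.request

STATUS_PATH = "/workstream_pattern_engine/processors/vision/status"

def engine_status(base_url: str = "http://localhost:1000") -> dict:
    """GET the current engine status snapshot as a dict."""
    with urllib.request.urlopen(base_url.rstrip("/") + STATUS_PATH) as resp:
        return json.loads(resp.read())

def is_active(status: dict) -> bool:
    """Hypothetical interpretation: treat the engine as active when a
    'vision.activation' entry is present in the snapshot."""
    return status.get("vision", {}).get("activation") is not None
```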

### Example {#workstream_pattern_engine_processors_vision_status-example}

# QGPTConversationPipeline | Python SDK

This model is specifically for QGPT conversation pipelines; it is used to group conversational prompts, for instance a conversation around generating code. Use cases:

1. `contextualized_code_generation` - for users that want to have conversations around generating code when context is provided.
2. `generalized_code` - for users that want to have conversations around code without context.
3. `contextualized_code` - for users that want to have conversations around code with context.
4. `contextualized_code_workstream` - for users that want to use context as well as LTME information in their chat (we will prioritize LTME information, but also support other info as well).
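The variants above can be modeled as a one-of selection. This is a hypothetical mirror of the model for illustration only; the field names follow the use cases listed here, but the real SDK class may be shaped differently:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class QGPTConversationPipeline:
    """Exactly one variant field should be set to pick the pipeline."""
    contextualized_code_generation: Optional[dict] = None
    generalized_code: Optional[dict] = None
    contextualized_code: Optional[dict] = None
    contextualized_code_workstream: Optional[dict] = None

    def variant(self) -> str:
        """Return the name of the single variant that is set."""
        chosen = [name for name, value in vars(self).items()
                  if value is not None]
        if len(chosen) != 1:
            raise ValueError("exactly one pipeline variant must be set")
        return chosen[0]
```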

## Properties

# QGPTConversationPipelineForContextualizedCodeWorkstreamDialog | Python SDK

This is for users that want to have contextualized code conversations around their workstream materials, meaning conversations around code with provided context as well as workstream information, i.e. information gathered from the LTME. This is a class so that we can add optional properties in the future.

## Properties

## **workstreamPatternEngineProcessorsVisionActivate** {#workstreampatternengineprocessorsvisionactivate}
> WorkstreamPatternEngineStatus workstreamPatternEngineProcessorsVisionActivate()

This will activate your Long-Term Memory Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April).

### Example {#workstreampatternengineprocessorsvisionactivate-example}

## **workstreamPatternEngineProcessorsVisionDataClear** {#workstreampatternengineprocessorsvisiondataclear}
> workstreamPatternEngineProcessorsVisionDataClear()

This will clear the data for the Long-Term Memory Engine, specifically our vision data. This endpoint accepts ranges of time for which the user wants to remove the processed data.

### Example {#workstreampatternengineprocessorsvisiondataclear-example}

## **workstreamPatternEngineProcessorsVisionDeactivate** {#workstreampatternengineprocessorsvisiondeactivate}
> WorkstreamPatternEngineStatus workstreamPatternEngineProcessorsVisionDeactivate()

This will deactivate your Long-Term Memory Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April).

### Example {#workstreampatternengineprocessorsvisiondeactivate-example}

## **workstreamPatternEngineProcessorsVisionStatus** {#workstreampatternengineprocessorsvisionstatus}
> WorkstreamPatternEngineStatus workstreamPatternEngineProcessorsVisionStatus()

This will get a snapshot of the status of your Long-Term Memory Engine. This is used to aggregate information on your user's desktop, specifically recording the application in focus and aggregating relevant context that will then be used to ground the copilot conversations, as well as the feed. Note: you must be a beta user to use this feature until it goes live (roughly mid to late April).

### Example {#workstreampatternengineprocessorsvisionstatus-example}

# QGPTConversationPipeline | TypeScript SDK

This model is specifically for QGPT conversation pipelines; it is used to group conversational prompts, for instance a conversation around generating code. Use cases:

1. `contextualized_code_generation` - for users that want to have conversations around generating code when context is provided.
2. `generalized_code` - for users that want to have conversations around code without context.
3. `contextualized_code` - for users that want to have conversations around code with context.
4. `contextualized_code_workstream` - for users that want to use context as well as LTME information in their chat (we will prioritize LTME information, but also support other info as well).

## Properties

# QGPTConversationPipelineForContextualizedCodeWorkstreamDialog | TypeScript SDK

This is for users that want to have contextualized code conversations around their workstream materials, meaning conversations around code with provided context as well as workstream information, i.e. information gathered from the LTME. This is a class so that we can add optional properties in the future.

## Properties

Join us for an exciting live Ask Me Anything (AMA) session where the Pieces team …

Plus, learn how you can create your own Pieces integration with our multilingual open-source SDKs, example projects, and brand-new documentation site.

Finally, we’ll be giving you a sneak peek into our Long-Term Memory Engine technology which enables you to contextualize your Pieces Copilot from every tool in your workflow, making traditional extensibility a thing of the past.

Register now and send us your questions ahead of time - your thoughts and opinions mean the world to us!

- **Feature Demo:** Get a first look at the current features, use cases for the extension, and sneak peeks at what’s coming next to elevate your coding experience.
- **Open Source Essentials:** Explore our newly minted SDK Docs and hands-on example projects to help get you started.
- **Python CLI Agent:** Discover the AI capabilities and opportunities of our new open-source CLI Agent, a new way to interact with your code and context.
- **Long-Term Memory Engine:** Experience the future of AI assistance with our new technology (currently in Beta) that understands your entire workflow.

## 💡 Who Should Attend? {#who-should-attend}
This AMA is perfect for developers, tech enthusiasts, and anyone keen on enhancing their coding workflow. Whether you're a seasoned pro or just starting out, there's something for everyone.
`docs/community/events/ama/live-context-security-and-privacy.mdx` (24 changes: 13 additions & 11 deletions)
---
title: AMA - Security & Privacy of Long-Term Memory in Pieces Copilot+
description: Join us on Tuesday, June 18 at 12:00pm EST for a technical deep dive on our Long-Term Memory feature, and the security and privacy implications behind it.
displayed_sidebar: docsSidebar
---

import CTAButton from "/src/components/CTAButton";
import SocialIcons from "/src/components/SocialIcons";
import {MiniSpacer} from "/src/components/Spacers";

# Under the Hood with Pieces: Deep Dive into the Security & Privacy of Long-Term Memory in Pieces Copilot+

> Live Stream Event - Tuesday, June 18, 12:00pm EST

![Long-Term Memory Security & Privacy AMA](/ama/live-context-security-and-privacy.png)

> Note: Long-Term Memory is the new name for Pieces Live Context. You may still see this older name in our videos and documentation.

## 🚀 Event Overview {#event-overview}
Just two days after the [announcement of our Long-Term Memory feature](https://www.youtube.com/watch?v=aP8u95RTCGE) in Pieces Copilot+, Microsoft announced their Copilot+ PC with “photographic memory” which “helps you remember things you may have forgotten”.

This launch unexpectedly brought many questions about the security and privacy of AI engines at the operating-system level, with many developers and organizations concerned about their sensitive data.

In this AMA (Ask Me Anything) live stream event, we’ll be uncovering the tech behind our Long-Term Memory feature powered by the Long-Term Memory Engine (LTME) to showcase our commitment to air-gapped developer experiences, and discuss the benefits of an offline-first approach to using AI to remember the right things, not everything.

It was far from easy to develop this feature without relying on traditional cloud-based recording methods, which can introduce security vulnerabilities. Ultimately, we created the LTME technology: it works across all major operating systems, operates on-device and in real time for extremely robust security and privacy, avoids network latency, data liability, and expensive cloud costs, and enables developers to 10x their productivity.

Learn more about our development journey and what this means for your workflow by registering for the live stream!

<CTAButton href={'https://getpieces.typeform.com/to/OvKdlD2r'} label={'Register Now'} type={'primary'} />

## 🛠 What You'll Learn {#what-youll-learn}
- **Security & Privacy First:** Discover how Long-Term Memory operates entirely on-device, ensuring your workflow data never leaves your computer.
- **Long-Term Memory Engine:** Understand the technology that shadows your workflow, capturing context locally across macOS, Windows, and Linux.
- **Behind the Scenes:** Get insights into the algorithms and models that power Long-Term Memory, including intelligent visual snapshots, OCR models, and the summarization & redaction step.
- **Local LLM Execution:** Learn how we leverage on-device LLM runtimes to keep your data private, with no need for cloud-based processing.

## 💡 Why Attend? {#why-attend}
- **Interactive Q&A:** Our founder and key engineers will be on hand to answer your questions live.
- **In-Depth Technical Breakdown:** Gain a deeper understanding of how Long-Term Memory seamlessly integrates into your development workflow while maintaining top-tier security.
- **Community Engagement:** Share your thoughts, feedback, and ideas to help us refine this feature for all developers.

## ❓ How to Participate {#how-to-participate}