554 Commits

Author SHA1 Message Date
n4ze3m
0226de7c39 feat: Add vector search to SearXNG search provider
chore: remove unnecessary type annotation in web.ts
2024-12-01 17:02:09 +05:30
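For context, a SearXNG provider typically queries a self-hosted SearXNG instance over its JSON API and then embeds the returned snippets for vector search. A minimal sketch of the query half, assuming a `searxngUrl` setting and that `format=json` is enabled on the instance; the function name and result mapping are illustrative, not the extension's actual code:

```ts
// Illustrative sketch of querying a SearXNG instance's JSON API.
// `searxngUrl` and the result mapping are assumptions.
interface SearxngResult {
  title: string
  url: string
  content?: string
}

export async function searchSearxng(
  searxngUrl: string,
  query: string
): Promise<SearxngResult[]> {
  // format=json must be enabled in the SearXNG instance settings
  const params = new URLSearchParams({ q: query, format: "json" })
  const res = await fetch(`${searxngUrl}/search?${params.toString()}`)
  if (!res.ok) {
    throw new Error(`SearXNG request failed with status ${res.status}`)
  }
  const data = await res.json()
  return (data.results ?? []).map((r: any) => ({
    title: r.title,
    url: r.url,
    content: r.content
  }))
}
```

The returned snippets would then be embedded and searched with an in-memory vector store (see the vector store sketch further down this log).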
n4ze3m
5687517238 feat: Add save functionality for system prompt and model settings 2024-12-01 15:52:52 +05:30
n4ze3m
77f0cdbb35 fix: remove generateHistory call from useMessage hook 2024-12-01 15:32:51 +05:30
n4ze3m
c8c71f69cc feat: Add SearXNG search provider and settings 2024-12-01 00:23:17 +05:30
n4ze3m
138e41e122 chore: bump manifest version to 1.3.6 2024-11-30 20:17:31 +05:30
n4ze3m
6d80798da9 feat: Add useMMap option to model settings 2024-11-30 20:17:03 +05:30
n4ze3m
e5e04c3674 feat: Add option to resume last chat when opening Web UI 2024-11-30 19:04:45 +05:30
Muhammed Nazeem
8d12e9152c
Merge pull request #260 from farooqpk/feature/ollama-url-input
feat: Set Ollama URL from ollamaInfo in PlaygroundEmpty
2024-11-27 12:57:27 +05:30
Muhammed Nazeem
82246ed5e1
Merge pull request #258 from Abubakar115e/main
Spell fixes for Norwegian
2024-11-27 12:56:16 +05:30
farooqpk
38979d979b feat: Set Ollama URL from ollamaInfo in PlaygroundEmpty 2024-11-27 12:40:05 +05:30
Abubakar115e
7f94d57463
Merge branch 'n4ze3m:main' into main 2024-11-26 19:49:59 +01:00
Abubakar115e
9b14710cab Merge branch 'main' of https://github.com/Abubakar115e/page-assist 2024-11-26 19:38:14 +01:00
n4ze3m
2452259189 docs(connection-issue): add steps to resolve Ollama connection issues
- Add new steps to resolve Ollama connection issues when directly accessed from the browser extension
- Instruct users to set `OLLAMA_HOST=0.0.0.0` environment variable and restart Ollama
- Provide additional troubleshooting guidance for users still facing issues
2024-11-25 10:08:46 +05:30
Muhammed Nazeem
7e796e7c58
Merge pull request #251 from n4ze3m/next
v1.3.5
2024-11-23 17:36:33 +05:30
n4ze3m
2c12b17dda feat(vision): add vision chat mode
- Add new "vision" chat mode to the application
- Implement the `visionChatMode` function to handle vision-based chat interactions
- Update the UI to include a new button to toggle the vision chat mode
- Add new translations for the "vision" chat mode tooltip
- Disable certain UI elements when the vision chat mode is active
2024-11-23 14:04:57 +05:30
n4ze3m
edc5380a76 feat(i18n): add Ukrainian translations for Playground and Settings
- Add new "welcome" string to the Ukrainian Playground translation
- Add new "ollamaStatus" setting to the Ukrainian Settings translation
2024-11-23 11:53:13 +05:30
Muhammed Nazeem
0421f66f92
Merge pull request #254 from vlisivka/main
Add Ukrainian translation for Page Assist.
2024-11-22 21:56:37 +05:30
Volodymyr M. Lisivka
73306d95db Add Ukrainian translation for Page Assist 2024-11-22 17:45:39 +02:00
n4ze3m
eaf0e5b241 feat(NewChat): improve button styles and layout
- Update the styles and layout of the "New Chat" and dropdown buttons in the NewChat component
- Round the buttons to match the design
- Adjust the spacing and alignment of the button elements
- Ensure consistent styling between light and dark modes
2024-11-17 12:29:41 +05:30
n4ze3m
92013f3bfc feat(settings): add Ollama connection status check setting
- Add new setting to enable/disable Ollama connection status check
- Update translations for the new setting across all supported languages
2024-11-17 12:26:14 +05:30
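A minimal sketch of what a setting-gated Ollama connection check could look like; the function name and parameters are assumptions, and the only fact relied on is that a running Ollama server answers its base URL with HTTP 200:

```ts
// Sketch only: gate the connection check behind the new setting.
export async function checkOllamaStatus(
  ollamaUrl: string,
  statusCheckEnabled: boolean
): Promise<"ok" | "unreachable" | "skipped"> {
  // Setting turned off: don't ping Ollama at all
  if (!statusCheckEnabled) return "skipped"
  try {
    // A running Ollama server responds on its base URL with HTTP 200
    const res = await fetch(ollamaUrl, { method: "GET" })
    return res.ok ? "ok" : "unreachable"
  } catch {
    return "unreachable"
  }
}
```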
n4ze3m
ca26e059eb feat: Improve memory embedding and vector store handling
This commit includes the following improvements:

- Update the `memoryEmbedding` function to use the `PAMemoryVectorStore` instead of the generic `MemoryVectorStore`, so the vector store is tailored specifically to the Page Assist application.
- Modify the `useMessage` hook to use the `PAMemoryVectorStore` type for the `keepTrackOfEmbedding` state.
- Update the `rerankDocs` function to use the `EmbeddingsInterface` type instead of the deprecated `Embeddings` type.
- Add a new `PageAssistVectorStore` class that extends the `VectorStore` interface and provides a custom implementation for the Page Assist application.

These changes improve the handling of memory embeddings and vector stores, ensuring better compatibility and performance within the Page Assist application.
2024-11-16 19:33:51 +05:30
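As a rough illustration of the idea behind a Page Assist specific in-memory vector store, here is a self-contained sketch using cosine similarity; the class and method names are loosely modeled on the commit above but are not the actual `PAMemoryVectorStore` implementation:

```ts
// Minimal interface standing in for an embeddings provider
// (in the app this would be LangChain's EmbeddingsInterface).
interface EmbedderLike {
  embedDocuments(texts: string[]): Promise<number[][]>
  embedQuery(text: string): Promise<number[]>
}

interface StoredVector {
  content: string
  embedding: number[]
  metadata: Record<string, unknown>
}

export class InMemoryVectorStoreSketch {
  private vectors: StoredVector[] = []

  constructor(private embeddings: EmbedderLike) {}

  // Embed and store a batch of texts with optional metadata
  async addTexts(texts: string[], metadatas: Record<string, unknown>[] = []) {
    const embedded = await this.embeddings.embedDocuments(texts)
    texts.forEach((content, i) => {
      this.vectors.push({
        content,
        embedding: embedded[i],
        metadata: metadatas[i] ?? {}
      })
    })
  }

  // Return the k stored entries most similar to the query
  async similaritySearch(query: string, k = 4): Promise<StoredVector[]> {
    const queryEmbedding = await this.embeddings.embedQuery(query)
    return this.vectors
      .map((v) => ({ v, score: cosineSimilarity(queryEmbedding, v.embedding) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, k)
      .map(({ v }) => v)
  }
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1)
}
```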
n4ze3m
64e88bd493 feat: Add support for chat mode in MessageSource component
This commit extends the `MessageSource` component to support the `chat` mode in addition to the existing `rag` mode. This allows the component to handle and display messages from both chat and RAG (Retrieval-Augmented Generation) sources.
2024-11-16 19:33:20 +05:30
n4ze3m
726d3e3427 feat: Improve PageAssistSelect component
This commit enhances the `PageAssistSelect` component with the following improvements:

- Adds a `ref` to the options container to automatically scroll to the selected option when the dropdown is opened.
- Fixes an issue where the `selectedOption` was not being correctly determined when the `options` array was updated.
- Improves the rendering of the selected option, ensuring that the loading state, placeholder, and selected option are displayed correctly.
- Adds a `data-value` attribute to the option elements to facilitate scrolling to the selected option.

These changes improve the overall user experience and functionality of the `PageAssistSelect` component.
2024-11-16 19:33:04 +05:30
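The scroll-to-selected behaviour described above can be sketched as follows; component and prop names are assumptions, and only the `data-value` attribute plus `scrollIntoView` idea comes from the commit message:

```tsx
import React, { useEffect, useRef } from "react"

type Option = { label: string; value: string }

export const SelectOptionsSketch: React.FC<{
  options: Option[]
  selectedValue?: string
  isOpen: boolean
  onSelect: (value: string) => void
}> = ({ options, selectedValue, isOpen, onSelect }) => {
  const containerRef = useRef<HTMLDivElement>(null)

  useEffect(() => {
    if (!isOpen || !selectedValue || !containerRef.current) return
    // Each option carries a data-value attribute so the selected one can be
    // located and scrolled into view when the dropdown opens.
    const el = containerRef.current.querySelector(
      `[data-value="${CSS.escape(selectedValue)}"]`
    )
    el?.scrollIntoView({ block: "nearest" })
  }, [isOpen, selectedValue])

  if (!isOpen) return null

  return (
    <div ref={containerRef} role="listbox">
      {options.map((opt) => (
        <div
          key={opt.value}
          data-value={opt.value}
          role="option"
          aria-selected={opt.value === selectedValue}
          onClick={() => onSelect(opt.value)}>
          {opt.label}
        </div>
      ))}
    </div>
  )
}
```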
n4ze3m
c4d9e3aeed feat: Add page assist select component to header
This commit introduces a new `PageAssistSelect` component to the header, which replaces the previous `Select` component for selecting the active chat model. The new component provides improved functionality, including:

- Ability to display provider icons alongside the model name
- Truncation of long model names to ensure they fit within the available space
- Improved loading state handling
- Ability to refresh the model list on demand

These changes enhance the user experience and make it easier for users to quickly select the desired chat model.
2024-11-16 15:50:11 +05:30
n4ze3m
4292dc45ea feat: Bump application version to 1.3.5
Updates the application version in the wxt.config.ts file from 1.3.4 to 1.3.5.
2024-11-16 15:42:31 +05:30
Muhammed Nazeem
dd6ef49698
Merge pull request #242 from n4ze3m/next
v1.3.4
2024-11-10 19:54:25 +05:30
n4ze3m
5678a0f8b2 feat: Add Korean language support
Add Korean language support to the application. This includes translating the necessary UI elements and adding Korean to the supported language list.
2024-11-10 19:34:38 +05:30
n4ze3m
a96193bbf8 feat: Add error handling for updating message by index
Adds error handling to the `updateMessageByIndex` function to prevent the temporary chat from breaking when an error occurs during the update process. This ensures a more robust and reliable experience for users.
2024-11-10 18:14:44 +05:30
n4ze3m
0f75de02cb Fix: Update temporary chat history
This commit addresses an issue where temporary chat history was not being updated correctly when using voice input.

The `setHistoryId` function is now called within the `saveMessageOnError` function to ensure that the history ID is set correctly when a message is saved.
2024-11-10 18:08:05 +05:30
n4ze3m
55f3838b6d feat: Improve ollama2 model fetching
Fetches ollama2 models more efficiently, with proper filtering and handling of providers. This makes the model loading process more robust and reliable.
2024-11-10 17:30:33 +05:30
n4ze3m
2409ebc75d feat: Add Ollama and Llamafile to dynamic model fetching
Expanded the list of providers for which models are fetched dynamically to include Ollama and Llamafile, removing the need for manual model addition in the user interface for these providers. This simplifies the user experience and ensures users always have access to the latest models without manual intervention.
2024-11-10 15:38:03 +05:30
n4ze3m
a7f461da0b feat: Add multiple Ollama support
Adds support for using Ollama 2 as a model provider. This includes:

- Adding Ollama 2 to the list of supported providers in the UI
- Updating the model identification logic to properly handle Ollama 2 models
- Modifying the model loading and runtime configuration to work with Ollama 2
- Implementing Ollama 2 specific functionality in the embedding and chat models

This change allows users to leverage the capabilities of Ollama 2 for both embeddings and conversational AI tasks.
2024-11-10 15:31:28 +05:30
n4ze3m
c6a62126dd feat: Add LlamaFile support
Add support for LlamaFile, a new model provider that allows users to interact with models stored in LlamaFile format. This includes:

- Adding an icon for LlamaFile in the provider selection menu.
- Updating the model provider selection to include LlamaFile.
- Updating the model handling logic to properly identify and process LlamaFile models.
- Updating the API providers list to include LlamaFile.

This enables users to leverage the capabilities of LlamaFile models within the application.
2024-11-10 14:02:44 +05:30
n4ze3m
f52e3d564a Fix: Handle image URLs in custom model responses
Improves the formatting of image URLs in responses from custom models, ensuring they are presented correctly in the user interface.
2024-11-10 13:29:41 +05:30
n4ze3m
9c7a3f5ddc fix: Prevent errors from optional fields in chat history and chunk content
The code was relying on optional fields like `content` in chat history and chunk objects, leading to potential errors if these fields were missing. This commit ensures proper handling of these fields by adding optional chaining (`?.`) for safer access. This prevents crashes and ensures the application handles the missing fields gracefully.
2024-11-10 12:37:48 +05:30
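A tiny illustration of the defensive access described above, with assumed shapes for the history and chunk objects:

```ts
type HistoryMessage = { role: string; content?: string }
type Chunk = { content?: string }

function toPromptText(history: HistoryMessage[], chunk?: Chunk): string {
  // Optional chaining and nullish coalescing keep missing `content`
  // fields from throwing at runtime.
  const historyText = history.map((m) => m?.content ?? "").join("\n")
  const chunkText = chunk?.content ?? ""
  return `${historyText}\n${chunkText}`.trim()
}
```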
n4ze3m
f8791a0707 feat: Introduce temporary chat mode
Adds a new "Temporary Chat" mode for quick, non-persistent conversations. The new mode is available in the header bar and will trigger a visually distinct chat experience with a temporary background color. Temporary chats do not save to the chat history and are meant for short, one-off interactions. This feature enhances flexibility and provides a more convenient option for users who need to quickly interact with the AI without committing the conversation to their history.
2024-11-09 19:10:34 +05:30
n4ze3m
8fbdfc35d3 feat(settings): Use the selected system prompt as the temporary system prompt fallback for the current chat model
Adds support for using the currently selected system prompt in the current model settings. This allows users to fine-tune their chat experience based on their preferred prompt.
2024-11-09 18:09:47 +05:30
n4ze3m
88d0cb68ae feat: Support for new AI capabilities
Adds support for the new AI capabilities in Chrome. This change includes updated logic for checking availability and creating text sessions.
2024-11-09 17:58:23 +05:30
n4ze3m
977723f71f feat: Ability to send image without text 2024-11-09 17:13:23 +05:30
n4ze3m
fd654cafdb feat: Add max tokens setting for model generations
Adds a new setting to control the maximum number of tokens generated by the model. This provides more control over the length of responses and can be useful for limiting the amount of text generated in certain situations.
2024-11-09 16:56:47 +05:30
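A hedged sketch of how a user-configured max token limit could be passed through to Ollama, using the Ollama REST API's `num_predict` option; the helper name and settings shape are assumptions:

```ts
interface ModelSettings {
  maxTokens?: number
  temperature?: number
}

export async function chatWithLimit(
  ollamaUrl: string,
  model: string,
  messages: { role: "user" | "assistant" | "system"; content: string }[],
  settings: ModelSettings
) {
  const res = await fetch(`${ollamaUrl}/api/chat`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model,
      messages,
      stream: false,
      options: {
        // Only send an option when the user actually set a value.
        // num_predict caps the number of tokens Ollama will generate.
        ...(settings.maxTokens ? { num_predict: settings.maxTokens } : {}),
        ...(settings.temperature !== undefined
          ? { temperature: settings.temperature }
          : {})
      }
    })
  })
  return res.json()
}
```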
n4ze3m
7c805cfe22 feat: Include generation info in message history
Adds the `generationInfo` field to the message history output to provide more context about the message's origin. This will be helpful for debugging and understanding the provenance of messages.
2024-11-09 15:20:16 +05:30
n4ze3m
9f383a81b6 feat: Add generation info to messages
This commit introduces a new feature that displays generation information for each message in the chat.

The generation info is displayed in a popover and includes details about the model used, the prompt, and other relevant information. This helps users understand how their messages were generated and troubleshoot any issues that may arise.

The generation info is retrieved from the LLM response and is stored in the database alongside other message details.

This commit also includes translations for the generation info label in all supported languages.
2024-11-09 15:17:59 +05:30
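An illustrative shape for the stored generation info; the field names below are assumptions based on typical Ollama response metadata, not the extension's actual schema:

```ts
// Illustrative only: field names are assumptions, not the actual schema.
interface GenerationInfo {
  model?: string
  total_duration?: number
  prompt_eval_count?: number
  eval_count?: number
  [key: string]: unknown
}

interface StoredMessage {
  id: string
  role: "user" | "assistant"
  content: string
  createdAt: number
  generationInfo?: GenerationInfo
}

// Attach whatever metadata the LLM response exposed before saving the message.
function withGenerationInfo(
  message: Omit<StoredMessage, "generationInfo">,
  generationInfo?: GenerationInfo
): StoredMessage {
  return { ...message, generationInfo }
}
```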
Muhammed Nazeem
9ecc8c4e75
Merge pull request #238 from hkjang/korean-localization
add Korean localization
2024-11-09 13:27:22 +05:30
Abubakar115e
bbbc1523f1 Small spell fixes 2024-11-07 15:32:46 +01:00
hkjang
f61ce71d06 add Korean localization 2024-11-06 14:32:36 +09:00
Muhammed Nazeem
f803243761
Merge pull request #235 from n4ze3m/next
Bump version to 1.3.3
2024-11-04 13:52:03 +05:30
n4ze3m
fcdfed6e2b Bump version to 1.3.3
Update version number to reflect latest changes.
2024-11-04 13:51:31 +05:30
Muhammed Nazeem
14ee927785
Merge pull request #233 from n4ze3m/next
v1.3.2
2024-11-04 13:44:29 +05:30
n4ze3m
29169f8de1 feat: Add support for LaTeX environments
Support `equation` and `align` environments for LaTeX.

This change enables us to handle more complex LaTeX expressions, including those within environments. By replacing the delimiters for these environments with the appropriate MathJax equivalents, we ensure consistent rendering of mathematical equations within the application.
2024-11-03 23:43:09 +05:30
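A sketch of the kind of delimiter rewriting described above: wrapping `equation`/`align` environments in `$$ … $$` so the math renderer treats them as display math. The exact replacement the extension performs may differ; this regex is illustrative:

```ts
export function preprocessLatexEnvironments(text: string): string {
  // Match \begin{equation}/\begin{align} blocks (optionally starred) and
  // wrap them in display-math delimiters, keeping the environment intact.
  return text.replace(
    /\\begin\{((?:equation|align)\*?)\}([\s\S]*?)\\end\{\1\}/g,
    (_match, env: string, body: string) =>
      `$$\n\\begin{${env}}${body}\\end{${env}}\n$$`
  )
}
```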
n4ze3m
55e22ebc48 Fix: Improve LaTeX preprocessing logic
The previous LaTeX preprocessing logic had a bug that could lead to incorrect rendering of mathematical equations. This commit refactors the logic to ensure that both block-level and inline equations are properly handled, improving the accuracy of LaTeX rendering in the application.
2024-11-03 23:39:42 +05:30