This commit extends the `MessageSource` component to support the `chat` mode in addition to the existing `rag` mode. This allows the component to handle and display messages from both chat and RAG (Retrieval-Augmented Generation) sources.
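A minimal sketch of how the mode might be threaded through the component's props; the prop names and source shape below are illustrative assumptions, not the component's actual API:

```ts
// Hypothetical prop shape for the extended component; the real props may differ.
type MessageSourceMode = "rag" | "chat"

interface MessageSourceProps {
  mode: MessageSourceMode
  source: { title: string; url?: string; content: string }
}

// Branch on the mode so chat-origin sources are handled alongside RAG sources.
export function describeSource({ mode, source }: MessageSourceProps): string {
  return mode === "rag"
    ? `${source.title} (retrieved context)`
    : `${source.title} (chat attachment)`
}
```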
Add support for LlamaFile, a new model provider that allows users to interact with models stored in LlamaFile format. This includes:
- Adding an icon for LlamaFile in the provider selection menu.
- Updating the model provider selection to include LlamaFile.
- Updating the model handling logic to properly identify and process LlamaFile models.
- Updating the API providers list to include LlamaFile.
This enables users to leverage the capabilities of LlamaFile models within the application.
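As a rough illustration of the wiring involved (the registry shape, icon paths, and default URL below are assumptions, not the project's actual provider list), the change amounts to registering one more entry wherever providers are enumerated:

```ts
// Hypothetical provider registry; names and shape are illustrative.
type ProviderId = "ollama" | "openai" | "llamafile"

interface ProviderEntry {
  id: ProviderId
  label: string
  icon: string          // icon shown in the provider selection menu
  defaultBaseUrl: string
}

const API_PROVIDERS: ProviderEntry[] = [
  { id: "ollama", label: "Ollama", icon: "/icons/ollama.svg", defaultBaseUrl: "http://localhost:11434" },
  { id: "openai", label: "OpenAI", icon: "/icons/openai.svg", defaultBaseUrl: "https://api.openai.com/v1" },
  // LlamaFile typically serves an OpenAI-compatible endpoint on port 8080.
  { id: "llamafile", label: "LlamaFile", icon: "/icons/llamafile.svg", defaultBaseUrl: "http://localhost:8080/v1" }
]
```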
Adds support for using the currently selected system prompt in the current model settings. This allows users to fine-tune their chat experience based on their preferred prompt.
Adds a new setting to control the maximum number of tokens generated by the model. This gives more control over response length and can be used to cap how much text the model generates.
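A hedged sketch of how such a setting typically flows into a generation request; the settings shape is an assumption, and only the provider option names (`num_predict` for Ollama, `max_tokens` for OpenAI-compatible APIs) are standard:

```ts
// Hypothetical settings shape; the real keys may differ.
interface ModelSettings {
  temperature?: number
  maxTokens?: number // upper bound on generated tokens; undefined = provider default
}

// Map the app-level setting onto provider-specific option names.
function toProviderOptions(s: ModelSettings, provider: "ollama" | "openai") {
  if (s.maxTokens === undefined) return {}
  return provider === "ollama"
    ? { num_predict: s.maxTokens }  // Ollama's option name
    : { max_tokens: s.maxTokens }   // OpenAI-compatible option name
}
```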
This commit introduces a new feature that displays generation information for each message in the chat.
The generation info is displayed in a popover and includes details about the model used, the prompt, and other relevant information. This helps users understand how their messages were generated and troubleshoot any issues that may arise.
The generation info is retrieved from the LLM response and is stored in the database alongside other message details.
This commit also includes translations for the generation info label in all supported languages.
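A sketch of what persisting the generation info alongside a message might look like; the record shape and field names are illustrative assumptions rather than the actual schema:

```ts
// Hypothetical message record; the real database schema may differ.
interface StoredMessage {
  id: string
  chatId: string
  role: "user" | "assistant"
  content: string
  createdAt: number
  // Raw metadata returned by the LLM (model name, token counts, timings, ...),
  // kept verbatim so the popover can render whatever the provider reported.
  generationInfo?: Record<string, unknown>
}

function withGenerationInfo(
  message: Omit<StoredMessage, "generationInfo">,
  llmResponse: Record<string, unknown>
): StoredMessage {
  return { ...message, generationInfo: llmResponse }
}
```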
Adds a "Save" button to the edit message form in Playground, allowing users to save changes without immediately submitting them. This also introduces a new `isSend` flag to the `onEditFormSubmit` prop, enabling developers to control whether a message should be sent immediately or saved for later submission. This enhances flexibility and user control during the message editing process.
Adds a new "Download Code" button to the code block component, allowing users to download the code displayed for offline use.
This makes it easier to reuse and explore code snippets outside the application.
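One common way to implement such a download in the browser, shown here as an illustrative sketch rather than the component's actual code (the file name and extension mapping are assumptions):

```ts
// Create a temporary object URL for the snippet and trigger a download.
function downloadCode(code: string, language: string) {
  const extensions: Record<string, string> = { typescript: "ts", python: "py", html: "html" }
  const ext = extensions[language] ?? "txt"
  const blob = new Blob([code], { type: "text/plain;charset=utf-8" })
  const url = URL.createObjectURL(blob)
  const a = document.createElement("a")
  a.href = url
  a.download = `snippet.${ext}`
  a.click()
  URL.revokeObjectURL(url)
}
```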
Refine the Playground's UI to improve user experience:
- Streamline chat window layout for better message readability
- Introduce a knowledge selection dropdown for easier context setting
- Improve image upload integration for a smoother workflow
- Optimize spacing and styling for a more polished visual appearance
Adds an HTML preview feature to the code block component, allowing users to view the rendered output of their HTML code snippets. This improves the user experience by providing a more interactive and informative way to understand the code.
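A common way to render such a preview is a sandboxed iframe fed via `srcdoc`; the sketch below is an assumption about the approach, not necessarily the component's implementation:

```ts
// Hypothetical preview element: the HTML snippet is rendered in a sandboxed
// iframe so scripts and navigation from the snippet are restricted.
function buildHtmlPreview(htmlSource: string): HTMLIFrameElement {
  const frame = document.createElement("iframe")
  frame.setAttribute("sandbox", "") // no scripts, no same-origin access
  frame.srcdoc = htmlSource
  frame.style.width = "100%"
  frame.style.minHeight = "200px"
  return frame
}
```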
Add a provider selection dropdown to the OpenAI settings, enabling users to choose from pre-configured options like "Azure" or "Custom." This streamlines setup and allows for more flexibility in configuring OpenAI API endpoints. The dropdown pre-populates base URLs and names based on the selected provider.
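The behavior described above usually boils down to a small presets map; the entries, names, and URLs below are illustrative assumptions (the real list of pre-configured options may differ):

```ts
// Hypothetical presets keyed by the dropdown value.
const OPENAI_PROVIDER_PRESETS: Record<string, { name: string; baseUrl: string }> = {
  openai: { name: "OpenAI", baseUrl: "https://api.openai.com/v1" },
  azure: { name: "Azure OpenAI", baseUrl: "https://<resource>.openai.azure.com" },
  custom: { name: "Custom", baseUrl: "" } // user fills in the base URL manually
}

// When the selection changes, pre-populate the form fields from the preset.
function applyPreset(key: string, setField: (field: "name" | "baseUrl", value: string) => void) {
  const preset = OPENAI_PROVIDER_PRESETS[key]
  if (!preset) return
  setField("name", preset.name)
  setField("baseUrl", preset.baseUrl)
}
```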
Adds the title of the current chat history to the share modal. This makes it easier for users to share their conversations with others, as the title provides context for the conversation.
The title is fetched from the database and displayed in the share modal.
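A minimal sketch of the lookup, assuming a simple key-value storage layer (the storage interface and key format are hypothetical):

```ts
// Hypothetical lookup; the real storage layer may differ.
async function getChatTitle(
  db: { get: (key: string) => Promise<{ title?: string } | undefined> },
  historyId: string
): Promise<string> {
  const record = await db.get(`chat:${historyId}`)
  return record?.title ?? "Untitled chat"
}
```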
Adds a new setting that allows users to set a temporary system prompt for the current chat.
This prompt will override the selected system prompt if it exists.
The new setting is available in the "Current Chat Model Settings" modal.
This feature provides a way to quickly experiment with different system prompts without having to change the default setting.
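The override rule described above can be summarized as a small precedence check; the function and field names are illustrative assumptions:

```ts
// The temporary (per-chat) prompt wins over the selected default, when present.
function resolveSystemPrompt(opts: {
  temporarySystemPrompt?: string // set in "Current Chat Model Settings"
  selectedSystemPrompt?: string  // the globally selected system prompt
}): string | undefined {
  const temp = opts.temporarySystemPrompt?.trim()
  return temp ? temp : opts.selectedSystemPrompt
}
```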
Adds support for OpenAI models, allowing users to leverage various OpenAI models directly from the application, including custom OpenAI-compatible models and OpenAI-specific configuration options.
Adds LaTeX support to the markdown renderer using `rehype-katex` for math equations.
- Replaces block-level LaTeX delimiters `\[ \]` with `$$ $$`.
- Replaces inline LaTeX delimiters `\( \)` with `$ $`.
- Preprocesses the message before rendering to ensure correct delimiters.
This improves the rendering of markdown messages containing mathematical expressions, enhancing the user experience.
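A sketch of the pre-processing step described above; the regular expressions illustrate the delimiter swap but are not necessarily the renderer's exact implementation:

```ts
// Convert LaTeX-style delimiters to the $-style delimiters that
// remark-math / rehype-katex recognize.
export function preprocessLaTeX(content: string): string {
  // Block math: \[ ... \]  ->  $$ ... $$
  const withBlock = content.replace(/\\\[([\s\S]*?)\\\]/g, (_m, expr) => `$$${expr}$$`)
  // Inline math: \( ... \)  ->  $ ... $
  return withBlock.replace(/\\\(([\s\S]*?)\\\)/g, (_m, expr) => `$${expr}$`)
}
```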
Adds a collapsible section to the playground message that displays citations for the response. This is intended to help users better understand the sources used by the model.