The simplest way to self-host Pygmalion 2 7B. Launch a dedicated cloud GPU server running Lab Station OS to download and serve the model using any compatible app or framework.
Download model weights for local inference. Must be used with a compatible app, notebook, or codebase. May run slowly, or not work at all, depending on your system resources, particularly GPU(s) and available VRAM.
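For local inference with downloaded weights, one common route is the Hugging Face transformers library. The sketch below is illustrative, not the only supported path: it assumes the weights are published under the repository id PygmalionAI/pygmalion-2-7b (an assumption; point MODEL_ID at your local download directory if it differs) and that the accelerate package is installed for automatic device placement.

```python
# Minimal sketch: loading Pygmalion 2 7B locally with Hugging Face transformers.
# MODEL_ID is an assumed repo id; replace it with your local weights path if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "PygmalionAI/pygmalion-2-7b"  # assumed Hugging Face repository id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision keeps the ~7B parameters near 14 GB of VRAM
    device_map="auto",          # spread layers across available GPU(s)/CPU; requires accelerate
)
```

Whether this fits on your hardware depends chiefly on available VRAM; quantized builds (e.g. 4-bit) reduce the footprint further at some cost in quality.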
Pygmalion 2 7B is a conversational AI model built on Llama-2 7B, optimized for creative writing and interactive dialogue. It features a three-role token system (system/user/model) for structured conversations and was trained on curated storytelling and dialogue datasets using the Axolotl framework.
Pygmalion-2 7B is an instruction-tuned language model built on Meta AI's Llama-2 7B architecture. It specializes in conversation, role-playing, and creative writing, and introduces an updated prompting format alongside specialized training data.
The model's foundation is the Llama-2 7B architecture, fine-tuned on a diverse collection of data sources. Training incorporated standard instruction data alongside specialized content from role-playing forums, fictional stories, and conversations; a key component was the PIPPA dataset, developed specifically for this purpose. Training was carried out with the Axolotl framework.
One of Pygmalion-2 7B's most distinctive features is its prompting format, a refinement of the format used by its predecessor. The model implements a three-role token system:
<|system|>: provides background information and context
<|user|>: designates user input and queries
<|model|>: marks the model's generated responses

This token system enables complex interactions and maintains coherent conversation history, making it particularly effective for creative writing and role-playing applications. The model excels at generating contextually appropriate responses while maintaining character consistency throughout extended interactions.
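As an illustration of the prompt format, the sketch below concatenates the three role tokens into a single prompt string and generates a reply, reusing the model and tokenizer objects from the loading example above. The persona and dialogue text are placeholders, and the exact whitespace conventions around the tokens are an assumption here; consult the model card for canonical examples.

```python
# Minimal sketch of the three-role prompt format, assuming `model` and
# `tokenizer` from the loading example above. Persona and user text are
# illustrative placeholders.
system = "Enter RP mode. You are playing a friendly medieval innkeeper."
user_turn = "Do you have a room for the night?"

# Tokens are concatenated directly with their text (whitespace handling may vary).
prompt = f"<|system|>{system}<|user|>{user_turn}<|model|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
)

# Slice off the prompt tokens so only the newly generated reply is decoded.
reply = tokenizer.decode(
    output[0][inputs["input_ids"].shape[-1]:],
    skip_special_tokens=True,
)
print(reply)
```

For multi-turn conversations, the same pattern extends naturally: append alternating <|user|> and <|model|> segments to the prompt so the model sees the full dialogue history.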
It's important to note that Pygmalion-2 7B was developed primarily for fictional writing and entertainment purposes. The model has not undergone safety fine-tuning, which means it may generate content that could be considered offensive or inaccurate. Users should exercise appropriate caution and judgment when implementing the model.
The model operates under the Llama-2 license, which permits both commercial and non-commercial applications. This licensing framework provides flexibility for developers and researchers while maintaining certain usage guidelines and restrictions.