
Accessible and portable generative AI solutions for developers and businesses.

Description

"Bella-2-8b" by Cognitivess is a text generation model tailored for empathic AI interactions, supporting both English and Romanian languages. The model, built on the transformers architecture, features 8.03 billion parameters , well-suited for a variety of text generation tasks, including question answering, summarization, reasoning, dialogue, sentiment analysis. It employs a floating-point 16 (BF16) tensor type for operations, facilitating speech-to-speech applications. Licensed under Cognitivess AI, Bella-2-8b is available on the Hugging Face platform for wide accessibility.

Intended use

Bella-2-8B is a multilingual chat model intended for diverse language applications, supporting English, Romanian, Spanish, French, German, and many more.

Model Developer: Cognitivess AI

Model Dates: Bella-2-8b was trained between May 2024 and June 2024.

Data Freshness: The pretraining data has a cutoff of June 2024. Training will continue beyond the current data cutoff date to incorporate new data as it becomes available.

Model Architecture:

The Bella-2-8B architecture is Transformer-based, and the model was trained with a sequence length of 8192 tokens (a quick way to check this from the checkpoint config is sketched below).

Architecture Type: Transformer (auto-regressive language model)
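The stated context window can be confirmed from the checkpoint configuration itself. This is a hedged sketch: it assumes the config exposes max_position_embeddings, as Llama-style transformer configs typically do.

from transformers import AutoConfig

config = AutoConfig.from_pretrained("CognitivessAI/bella-2-8b")
# Expected: 8192, per the sequence length stated above.
print(getattr(config, "max_position_embeddings", "not set"))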

Try this model on bella.cognitivess.com now.


Usage


from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_path = "CognitivessAI/bella-2-8b"

# Load the tokenizer and model; loading in bfloat16 matches the card's BF16 tensor type
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.bfloat16).eval()

# Move the model to CUDA if available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

# Prompt content: "Who are you?"
messages = [
    {"role": "user", "content": "Who are you?"}
]

# Build the model input with the chat template, appending the generation prompt
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors='pt'
)

# Move input_ids to the same device as the model
input_ids = input_ids.to(device)

# Generate a response, capping its length with max_new_tokens
output_ids = model.generate(input_ids, max_new_tokens=50)

# Decode only the newly generated tokens, skipping the prompt portion
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)

# Model response: "I'm Bella, an AI model developed by Cognitivess."
print(response)
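For more varied, conversational output, sampling can be enabled at generation time. The snippet below continues from the code above (reusing model, tokenizer, and device) and uses a Romanian prompt to exercise the bilingual support described earlier; the sampling values are illustrative assumptions, not recommendations from this card.

messages = [{"role": "user", "content": "Salut! Cine ești?"}]  # Romanian: "Hi! Who are you?"
input_ids = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors='pt'
).to(device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.7,   # illustrative values; tune for your application
    top_p=0.9,
)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True))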

Contact: [email protected]

Model size: 8.03B params
Tensor type: BF16
Weights format: Safetensors
