NARENDRAN, Kosei Masuda, Mariam Yafai, Lim Heng Hoe
Published

LAVApose LLM (Asia Pacific University)

An AI model that offers personalized medical advice. It is adaptable to any language and medical field, with a focus on older adults.

Intermediate · Full instructions provided · 5 days · 122

Things used in this project

Hardware components

AMD Instinct™ MI210
×1

Software apps and online services

Google Colab
AMD ROCm™ Software
Gradient AI

Story


Code

LAVApose medical LLM

Python
Code for fine-tuning the LLM
!pip install gradientai
import pandas as pd
from gradientai import Gradient

csv_file_path = "medical_dialog.csv"
# Load the CSV file
df = pd.read_csv(csv_file_path)

# Extract the "Doctor" and "Patient" columns
df_extracted = df[['Doctor', 'Patient']]

# Format the data into the desired structure using all rows
formatted_data = []
for index, row in df_extracted.iterrows():
    entry = {
        "inputs": f"### Instruction: {row['Patient']}\n\n### Response: {row['Doctor']}"
    }
    formatted_data.append(entry)
    
def chunk_data(data, chunk_size):
    for i in range(0, len(data), chunk_size):
        yield data[i:i + chunk_size]
        
import os
# Set your own Gradient credentials here (the original listing exposed real keys)
os.environ['GRADIENT_WORKSPACE_ID'] = 'your-workspace-id'
os.environ['GRADIENT_ACCESS_TOKEN'] = 'your-access-token'

def fine_tune_model(samples, access_token, workspace_id, base_model_slug="nous-hermes2", model_name="HHmodel", epochs=2, chunk_size=50):
    gradient = Gradient(access_token=access_token, workspace_id=workspace_id)
    base_model = gradient.get_base_model(base_model_slug=base_model_slug)
    new_model_adapter = base_model.create_model_adapter(name=model_name)
    print(f"Created model adapter with id {new_model_adapter.id}")

    # Match the prompt template used for the training samples exactly
    sample_query = "### Instruction: I am always tired and I cough blood, why?\n\n### Response:"
    print(f"Asking: {sample_query}")
    # Before Finetuning
    completion = new_model_adapter.complete(query=sample_query, max_generated_token_count=100).generated_output
    print(f"Generated(before fine tuning): {completion}")

    for epoch in range(epochs):
        print(f"Fine tuning the model with iteration {epoch + 1}")
        for chunk in chunk_data(samples, chunk_size):
            new_model_adapter.fine_tune(samples=chunk)

    # After fine tuning
    completion = new_model_adapter.complete(query=sample_query, max_generated_token_count=500).generated_output
    print(f"Generated(after fine tuning): {completion}")

    # Clean up: delete the adapter and close the client
    # (remove the delete() call if you want to keep the fine-tuned model)
    new_model_adapter.delete()
    gradient.close()

# Run the fine-tuning process
if __name__ == "__main__":
    access_token = os.environ.get('GRADIENT_ACCESS_TOKEN')
    workspace_id = os.environ.get('GRADIENT_WORKSPACE_ID')
    fine_tune_model(samples=formatted_data, access_token=access_token, workspace_id=workspace_id)
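The data-preparation steps above (extracting the "Doctor" and "Patient" columns, formatting each pair into an instruction/response prompt, and splitting the samples into chunks) can be checked without the dataset or a Gradient account. The sketch below uses a small hypothetical in-memory DataFrame standing in for `medical_dialog.csv`; the column names and prompt template match the fine-tuning script, but the dialogue rows are made up for illustration.

```python
import pandas as pd

# Hypothetical stand-in for medical_dialog.csv (same columns as the real file)
df = pd.DataFrame({
    "Patient": ["I have a headache.", "My knee hurts.", "I feel dizzy."],
    "Doctor": ["Rest and stay hydrated.", "Apply ice and rest.", "Check your blood pressure."],
})

# Same prompt template as the fine-tuning script
formatted_data = [
    {"inputs": f"### Instruction: {row['Patient']}\n\n### Response: {row['Doctor']}"}
    for _, row in df.iterrows()
]

def chunk_data(data, chunk_size):
    # Yield successive slices of at most chunk_size samples
    for i in range(0, len(data), chunk_size):
        yield data[i:i + chunk_size]

# Three samples with chunk_size=2 produce two chunks (sizes 2 and 1)
chunks = list(chunk_data(formatted_data, 2))
print(len(chunks))
print(chunks[0][0]["inputs"])
```

In the real run, each chunk is passed to `new_model_adapter.fine_tune(samples=chunk)`, so `chunk_size` controls how many dialogue pairs are sent to the Gradient API per call.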

Credits

NARENDRAN
Kosei Masuda
Mariam Yafai
Lim Heng Hoe
Thanks to Gradient AI and Ahmed.
