
Added Hugging Face, GPT, and audio blocks #183

Merged · 2 commits · Aug 9, 2024
10 changes: 10 additions & 0 deletions frontend/core/blocks/audio-player/Dockerfile
@@ -0,0 +1,10 @@
FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY computations.py .

2 changes: 2 additions & 0 deletions frontend/core/blocks/audio-player/README.md
@@ -0,0 +1,2 @@
# block-view-audios
A block that generates an HTML file with an audio player per input file.
79 changes: 79 additions & 0 deletions frontend/core/blocks/audio-player/computations.py
@@ -0,0 +1,79 @@
import uuid
import shutil
import os


def compute(audio_paths):
    """Generate an HTML file with an audio player per input file.

    Inputs:
        audio_paths (list): A list of audio paths or a single audio path.

    Outputs:
        dict: A dictionary with the key 'html' whose value is the name of the
            generated HTML file.
    """

    if isinstance(audio_paths, str):
        audio_paths = [audio_paths]

    css_style = """
    <style>
        body {
            display: flex;
            flex-direction: column;
            justify-content: center;
            align-items: center;
            height: 100vh;
            margin: 0;
            background-color: #f0f0f0;
            font-family: 'Arial', sans-serif;
        }
        audio {
            width: 60%; /* Control the width of the audio player */
            margin: 10px 0; /* Add vertical spacing between players */
            box-shadow: 0 4px 6px rgba(0,0,0,0.1); /* Subtle shadow for 3D effect */
            border-radius: 10px; /* Rounded corners for the player */
        }
        h3 {
            margin: 20px 0 0; /* Spacing above the file name */
            color: #333;
            font-size: 16px;
        }
    </style>
    """

    audio_controls = ""
    for path in audio_paths:
        filename = os.path.basename(path)
        audio_controls += (
            f'<h3>{filename}</h3>\n'
            f'<audio controls src="{filename}" preload="none">'
            'Your browser does not support the audio element.</audio>\n'
        )

    html_template = f"""
    <html>
    <head>
        <title>Audio Player</title>
        {css_style}
    </head>
    <body>
        {audio_controls}
    </body>
    </html>
    """

    unique_id = str(uuid.uuid4())
    html_path = f"viz_{unique_id}.html"

    # Move each audio file next to the generated HTML so the relative
    # src attributes resolve.
    for path in audio_paths:
        destination = os.path.basename(path)
        if os.path.abspath(path) != os.path.abspath(destination):
            shutil.move(os.path.abspath(path), destination)

    with open(html_path, "w") as file:
        file.write(html_template)

    return {"html": html_path}


def test():
    """Test the compute function."""

    print("Running test")
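For a quick local smoke test, the essential rendering logic of this block can be exercised standalone. The sketch below is a trimmed stand-in, not the shipped block: the CSS and the file move are omitted, and `render_audio_page` is a hypothetical name.

```python
import os
import uuid


def render_audio_page(audio_paths):
    """Write a bare-bones HTML page with one <audio> element per path."""
    if isinstance(audio_paths, str):
        audio_paths = [audio_paths]
    players = "\n".join(
        f'<h3>{os.path.basename(p)}</h3>\n'
        f'<audio controls src="{os.path.basename(p)}" preload="none"></audio>'
        for p in audio_paths
    )
    html_path = f"viz_{uuid.uuid4()}.html"
    with open(html_path, "w") as f:
        f.write(f"<html>\n<body>\n{players}\n</body>\n</html>\n")
    return {"html": html_path}


result = render_audio_page(["clips/a.mp3", "clips/b.mp3"])
```

Opening the returned file in a browser should show one player per input, captioned with the file's basename.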
Binary file added frontend/core/blocks/audio-player/cover-image.png
Empty file.
54 changes: 54 additions & 0 deletions frontend/core/blocks/audio-player/specs.json
@@ -0,0 +1,54 @@
{
  "information": {
    "id": "audio-player",
    "name": "audio-player",
    "description": "Generates an HTML file with an audio player per input file.",
    "block_version": "block version number",
    "block_source": "core/blocks/audio-player",
    "block_type": "view",
    "system_versions": [
      "0.1"
    ]
  },
  "inputs": {
    "audio_paths": {
      "type": "Any",
      "connections": [],
      "relays": []
    }
  },
  "outputs": {
    "html": {
      "type": "Any",
      "connections": [],
      "relays": []
    }
  },
  "action": {
    "container": {
      "image": "audio-player",
      "version": "latest",
      "command_line": [
        "python",
        "entrypoint.py"
      ]
    }
  },
  "views": {
    "mode": "modal",
    "node": {
      "active": "True",
      "title_bar": {
        "background_color": "#D55908"
      },
      "preview": {
        "active": "true"
      },
      "html": "",
      "pos_x": "300",
      "pos_y": "200",
      "pos_z": "999"
    }
  },
  "events": {}
}
9 changes: 9 additions & 0 deletions frontend/core/blocks/gpt-storyteller/Dockerfile
@@ -0,0 +1,9 @@
FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY computations.py .
18 changes: 18 additions & 0 deletions frontend/core/blocks/gpt-storyteller/README.md
@@ -0,0 +1,18 @@
# block-gpt-storyteller
A block that writes a moral story about the given prompt. The story will be divided into 6 panels and returned as a JSON object containing the initial prompt and the generated story.

This block can be used in ZetaForge, an open-source platform enabling developers to design, build, and deploy AI pipelines quickly.
Check out ZetaForge on GitHub: https://github.com/zetane/zetaforge.

To use this block, provide your OpenAI API key and a short description of a moral you want to teach to a child. This block takes that information and writes a nice story that teaches that moral to children.
Input:
- `api_key`: Your OpenAI API key.
- `story_description`: The description for the moral story you want to generate.

Output:
- `story`: A JSON object containing the initial prompt and the generated story, paginated into 6 pages.

You can attach this block to a Text Viewer block (available on ZetaForge Block Library) to view the generated story, as shown below:


![Screenshot 2024-05-28 at 10 18 42 AM](https://github.com/zetane/block-gpt-storyteller/assets/97202788/4d0f295f-6970-4dc8-bac9-11d5d1615952)
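The `story` output keeps the prompt-plus-six-pages shape documented above. A sketch of consuming it downstream (the sample dict here is fabricated for illustration, not real model output):

```python
# Fabricated sample matching the documented shape: a prompt plus six pages.
sample_story = {
    "prompt": "I want to teach my son to eat more carrots.",
    "response": {
        f"page{i}": {"text": f"Page {i} of the story."} for i in range(1, 7)
    },
}


def page_texts(story):
    """Return the six page texts in reading order."""
    return [story["response"][f"page{i}"]["text"] for i in range(1, 7)]


for text in page_texts(sample_story):
    print(text)
```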
96 changes: 96 additions & 0 deletions frontend/core/blocks/gpt-storyteller/computations.py
@@ -0,0 +1,96 @@
from openai import OpenAI
import json


def tell_story(sentence, api_key):
    client = OpenAI(api_key=api_key)
    completion = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[
            {
                "role": "user",
                "content": """
You are an educator and writer who teaches children good morals by incorporating those morals and good
habits into stories in illustrated books. You are given an input that is a moral or good habit that a parent
wants to teach their children. Create a short and simple story with at least two characters that will teach that moral to the children.
The story should be divided into six panels that are easy to illustrate. When referring to a character for the
first time in the story, describe how they look. No two panels should be exactly the same. Keep the story simple
to follow and illustrate. Format your story according to the following example:
input prompt: I want to teach my son to eat more carrots because they are healthy.
your output:
'''
1) Panel one
Once upon a time, there was a beautiful princess with long golden hair and brown eyes, named Diana. Diana
had a dear friend who was not like other people at the castle; he was a dragon! His name was Mushu. He had
bright green scales and large purple wings that spread wide. Diana and Mushu played in the castle's garden
and always had fun when they were together.

2) Panel two
Mushu loved carrots. So, Diana took Mushu to pick up carrots from the garden every day. They would run
in the carrot garden until they were thirsty. Then, they drank some water from the fountain and ate some
carrots together to power up and go back to their game.

3) Panel three
But things changed one day when they went to play in the garden. Although Diana was so energetic and ready
to run and play more and more, Mushu was very tired from the beginning and couldn't play with Diana. Mushu
and Diana were both sad that Mushu couldn't run in the garden like every day.

4) Panel four
Diana was looking for a way to make Mushu strong again so they could play together. She went into the
garden and picked some fresh orange carrots for Mushu by herself. Diana brought a basket full of carrots
to Mushu.

5) Panel five
Mushu started munching on the carrots. As he ate more carrots, he felt stronger, happier, and healthier.
When Mushu finished eating the carrots Diana brought, he started laughing again and wanted to run in the
garden and play with Diana like every day!

6) Panel six
Diana and Mushu learned together that carrots are healthy and full of nutrients that are good for both
humans and dragons. Eating carrots helped them play longer in the beautiful garden and stay full of joy!
'''
After you write your story, reformat it as JSON according to this example:
{
    "prompt": "I want to teach my son to eat more carrots because they are healthy.",
    "response":
    {
        "page1": {"text": Once upon a time, there was a beautiful...},
        "page2": {"text": ...},
        ...
        "page6": {"text": ...}
    }
}

Don't include panel titles, such as "1) Panel one", in the output dictionary. Output only the dictionary with no explanations so that it is convertible to a json object as is.
Here is the input prompt:
"""
                + sentence,
            }
        ],
        temperature=0.0,
        response_format={"type": "json_object"},
    )
    return json.loads(completion.choices[0].message.content)


def compute(story_description, api_key):
    """
    Generates a story based on a given prompt.

    Args:
        story_description (str): A description or prompt for the story to be generated.
        api_key (str): API key required for accessing the story generation service.

    Returns:
        dict: A dictionary containing the generated story under the key "story".
    """
    story = tell_story(story_description, api_key)
    return {"story": story}


def test():
    """Test the compute function."""

    print("Running test")
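Because the model is instructed to emit raw JSON, a caller may want to check the shape before trusting it. A hedged sketch (`validate_story` is a hypothetical helper, not part of the block):

```python
import json


def validate_story(raw):
    """Parse the story if it is still a string, then verify all six pages exist."""
    story = raw if isinstance(raw, dict) else json.loads(raw)
    response = story.get("response", {})
    missing = [f"page{i}" for i in range(1, 7) if f"page{i}" not in response]
    if missing:
        raise ValueError(f"story is missing pages: {missing}")
    return story


# A well-formed sample passes; an empty response would raise ValueError.
pages = {f"page{i}": {"text": "..."} for i in range(1, 7)}
ok = validate_story({"prompt": "p", "response": pages})
```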
1 change: 1 addition & 0 deletions frontend/core/blocks/gpt-storyteller/requirements.txt
@@ -0,0 +1 @@
openai==1.25.0
58 changes: 58 additions & 0 deletions frontend/core/blocks/gpt-storyteller/specs.json
@@ -0,0 +1,58 @@
{
  "information": {
    "id": "gpt-storyteller",
    "name": "GPT Story Teller",
    "description": "Generates a story based on a given prompt.",
    "system_versions": [
      "0.1"
    ],
    "block_version": "block version number",
    "block_source": "core/blocks/gpt-storyteller",
    "block_type": "compute"
  },
  "inputs": {
    "api_key": {
      "type": "Any",
      "connections": [],
      "relays": []
    },
    "story_description": {
      "type": "Any",
      "connections": [],
      "relays": []
    }
  },
  "outputs": {
    "story": {
      "type": "Any",
      "connections": [],
      "relays": []
    }
  },
  "action": {
    "container": {
      "image": "gpt-storyteller",
      "version": "latest",
      "command_line": [
        "python",
        "entrypoint.py"
      ]
    }
  },
  "views": {
    "node": {
      "behavior": "modal",
      "active": "True",
      "title_bar": {
      },
      "preview": {
        "active": "false"
      },
      "html": "",
      "pos_x": "300",
      "pos_y": "200",
      "pos_z": "999"
    }
  },
  "events": {}
}
@@ -0,0 +1,9 @@
FROM python:3.9-slim

WORKDIR /app

COPY requirements.txt .

RUN pip install --no-cache-dir -r requirements.txt

COPY computations.py .
22 changes: 22 additions & 0 deletions frontend/core/blocks/hf-image-classification-inference/README.md
@@ -0,0 +1,22 @@
# block-hf-image-classification-inference

A block that runs image classification inference using dataset and model pairs on Hugging Face.

This block can be used in ZetaForge, an open-source platform enabling developers to design, build, and deploy AI pipelines quickly. Check out ZetaForge on GitHub: https://github.com/zetane/zetaforge.

This block takes in four parameters:
1) `dataset_name`: Name of the dataset on Hugging Face.
2) `split_name`: The dataset split to run the inference on.
3) `model_name`: Name of the model on Hugging Face.
4) `n_samples`: Number of samples to run inference on.

And returns the following as output:
1) `images`: List of paths to images that were used for inference.
2) `predictions`: List of prediction labels for each inference image.

To use this block for a dataset and model pair on Hugging Face, browse [https://huggingface.co/datasets](https://huggingface.co/datasets) to find the dataset you want to run inference on and note the dataset name. Then, by referring to the dataset card, find the dataset split you are interested in, e.g., `test`. To choose a model trained on the selected dataset, select a model from the right-hand side menu on the dataset page (Models trained or fine-tuned on `$DATASET_NAME$`...). Note that this block is only designed for dataset and model pairs that support the image classification task. Please refer to the image below to learn more about how to use Hugging Face to find a dataset and model pair:

![huggingface-guide](https://github.com/zetane/block-hf-image-classification-inference/assets/97202788/60f327bb-6989-4065-bce2-6dd0ccf67f12)

To view the output, you can connect this block to the View Images core block, or create your own image viewer block that supports captions as well to visualize both the images and the predictions, like the example screenshot below:
![hf-image-classification-example](https://github.com/zetane/block-hf-image-classification-inference/assets/97202788/337b78f0-235a-4e1e-8d8b-e94a230fbeea)
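The block's input/output contract can be pictured with stubs in place of the real Hugging Face dataset and model (every name below is hypothetical; the shipped block presumably uses the `datasets` and `transformers` libraries):

```python
def run_inference(dataset, classify, n_samples):
    """Classify the first n_samples examples; mirror the block's outputs."""
    images, predictions = [], []
    for i, example in enumerate(dataset):
        if i >= n_samples:
            break
        images.append(example["image_path"])
        predictions.append(classify(example["image_path"]))
    return {"images": images, "predictions": predictions}


# Stub stand-ins for a dataset split and an image-classification pipeline.
fake_split = [{"image_path": f"img_{i}.png"} for i in range(10)]
fake_classifier = lambda path: "tabby cat"

out = run_inference(fake_split, fake_classifier, n_samples=4)
```

With a real pair, `fake_split` would be replaced by `load_dataset(dataset_name, split=split_name)` and `fake_classifier` by an image-classification pipeline built from `model_name`.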