feat(plugin-ai): Adding command 'ai:models:call' #21
Description
This PR implements the `ai:models:call` command, with tests, broadly following the design outlined in the UX design doc, the MIA API reference doc, and the discussions about the command's behavior in the following Slack threads:

- `ai:models:call` for LLM (chat completion) models behavior.
- `ai:models:call` for image generation models behavior.
- `ai:models:call` for embeddings generation models behavior.

How to test
Prepare your environment for testing

1. Run `yarn && yarn build`.
2. Set `HEROKU_INFERENCE_ADDON` to the production canary add-on via `export HEROKU_INFERENCE_ADDON="inference-staging"`.
3. Set `HEROKU_INFERENCE_HOST` to the production canary add-on default host via `export HEROKU_INFERENCE_HOST="staging.inference.herokai.com"`.
4. Create a test app with `heroku apps:create test-cli-plugin-ai`.
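If it helps, the environment prep can be run in one go (same commands and values as the steps above):

```sh
yarn && yarn build
export HEROKU_INFERENCE_ADDON="inference-staging"
export HEROKU_INFERENCE_HOST="staging.inference.herokai.com"
heroku apps:create test-cli-plugin-ai
```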
Actual testing
1. Run `./bin/run ai:models:create claude-3-sonnet -a test-cli-plugin-ai --as completion`.
2. Run `./bin/run ai:models:create stable-diffusion-xl -a test-cli-plugin-ai --as image`.
3. Run `./bin/run ai:models:create cohere-embed-multilingual -a test-cli-plugin-ai --as embeddings`.
4. Run `./bin/run ai:models:call --help`. Keep in mind that more option flags were added where appropriate, so the output will differ from the design doc.
5. Run `./bin/run ai:models:call completion -a test-cli-plugin-ai -p "<your prompt here>"`. The command should succeed and return a response for your prompt.
6. Check the `--json` and `--output` options. When using `--json`, the full JSON response should be displayed, or redirected to the output file when `--output` is also used (see the example after this list).
7. Run `./bin/run ai:models:call image -a test-cli-plugin-ai -p "<your prompt here>" --opts='{"response_format":"base64"}'`. The command should succeed and the Base64-encoded image content should be written to the standard output (a decoding sketch follows this list).
8. Run `./bin/run ai:models:call image -a test-cli-plugin-ai -p "<your prompt here>" --opts='{"response_format":"base64"}' --output=/tmp/output-image.png`. Verify that the command succeeds and that `open /tmp/output-image.png` shows the generated image.
9. Run `./bin/run ai:models:call image -a test-cli-plugin-ai -p "<your prompt here>" --opts='{"response_format":"base64"}' --output=/tmp/output-image.json`. Verify that the command succeeds and that `open /tmp/output-image.json` shows the full JSON response.
10. Run `./bin/run ai:models:call embeddings -a test-cli-plugin-ai -p "<your prompt here>"`. The command should succeed and a comma-separated list of float numbers (the embedding vector) should be displayed.
11. Check the `--json` and `--output` options. As with the completion call, `--json` should display the full JSON response, or redirect it to the output file when `--output` is also used.
12. Clean up with `heroku apps:destroy -a test-cli-plugin-ai`.
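For the `--json`/`--output` checks in steps 6 and 11, an invocation along these lines should exercise both flags together (the output path here is just an example):

```sh
./bin/run ai:models:call completion -a test-cli-plugin-ai -p "<your prompt here>" --json --output=/tmp/completion-response.json
cat /tmp/completion-response.json
```

The same pattern applies to the `embeddings` call.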
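For the stdout variant of the image call in step 7, if you want to view the generated image without `--output`, piping through a Base64 decoder should work, assuming the command writes only the bare Base64 string to stdout (the temp path is illustrative):

```sh
./bin/run ai:models:call image -a test-cli-plugin-ai -p "<your prompt here>" --opts='{"response_format":"base64"}' | base64 --decode > /tmp/stdout-image.png
open /tmp/stdout-image.png
```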
SOC2 Compliance
GUS Work Item