An AI Assistant for the command line.
Tuned to assist with developer tasks like finding files, installing packages, and working with Git.
Conversation mode can explain code snippets, generate unit tests, and scaffold new projects.
-
Install Moki:
go install github.com/ztkent/moki/cmd/moki@latest
-
Set your API key as an environment variable:
export OPENAI_API_KEY=<your key>
export REPLICATE_API_TOKEN=<your key>
-
Run Moki:
# Ask the assistant a question
moki [your question]

# Provide additional context
cat moki.go | moki [tell me about this code]
moki [tell me about this code] -file:moki.go
moki [tell me about this project] -url:https://github.com/ztkent/moki

# Start a conversation with the assistant
moki -c
moki -c -m=turbo -max-tokens=100000 -t=0.5
- There are two options for the API provider:
- OpenAI (https://platform.openai.com/docs/overview)
- Replicate (https://replicate.com/docs)
Flags:
-c: Start a conversation with Moki
-llm: Set the LLM Provider
-m: Set the model to use for the LLM response
-max-tokens: Set the maximum number of tokens to generate
-t: Set the temperature for the LLM response
-d: Show debug logging
Model Options:
- OpenAI:
- [Default] gpt-3.5-turbo, aka: turbo35
- gpt-4-turbo, aka: turbo
- gpt-4o, aka: gpt4o
- Replicate:
- [Default] meta-llama-3-8b, aka: l3-8b
- meta-llama-3-8b-instruct, aka: l3-8b-instruct
- meta-llama-3-70b, aka: l3-70b
- meta-llama-3-70b-instruct, aka: l3-70b-instruct
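The aliases above map short names to full model identifiers. A sketch of that lookup in Go (the table is reconstructed from the list above; Moki's internal mapping may differ):

```go
package main

import "fmt"

// modelAliases maps short aliases to full model names, as documented in the
// Model Options list. Reconstructed for illustration.
var modelAliases = map[string]string{
	"turbo35":         "gpt-3.5-turbo",
	"turbo":           "gpt-4-turbo",
	"gpt4o":           "gpt-4o",
	"l3-8b":           "meta-llama-3-8b",
	"l3-8b-instruct":  "meta-llama-3-8b-instruct",
	"l3-70b":          "meta-llama-3-70b",
	"l3-70b-instruct": "meta-llama-3-70b-instruct",
}

// resolveModel returns the full model name for an alias, and whether the
// alias is known.
func resolveModel(alias string) (string, bool) {
	model, ok := modelAliases[alias]
	return model, ok
}

func main() {
	model, _ := resolveModel("turbo")
	fmt.Println(model) // prints "gpt-4-turbo"
}
```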
The assistant can be used in conversation mode, which allows it to generate more in-depth responses.
moki -c
By default the assistant uses OpenAI. To use another provider, run the assistant with the -llm flag.
moki -llm=openai
moki -llm=replicate
Depending on the LLM Provider selected, different models are available.
moki -m=turbo
moki -m=l3-8b-instruct
moki -m=l3-70b
Tokens cost money, so by default the assistant limits any conversation to 100,000 tokens.
moki -max-tokens=100000
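The cap is a budget across the whole conversation. A rough sketch in Go of how such a budget could be checked, using the common ~4 characters per token rule of thumb (an approximation for illustration, not the tokenizer Moki or its providers actually use):

```go
package main

import "fmt"

// estimateTokens uses a rough heuristic of ~4 characters per token for
// English text. Approximate only; real tokenizers behave differently.
func estimateTokens(text string) int {
	return len(text)/4 + 1
}

// withinBudget reports whether the estimated token count of the whole
// conversation history fits under maxTokens.
func withinBudget(history []string, maxTokens int) bool {
	total := 0
	for _, msg := range history {
		total += estimateTokens(msg)
	}
	return total <= maxTokens
}

func main() {
	history := []string{"tell me about this code", "a much longer reply from the model"}
	fmt.Println(withinBudget(history, 100000)) // prints "true"
}
```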
The temperature of an LLM response is a measure of randomness.
It is a float between 0 and 1. By default the temperature is 0.2.
moki -t=0.5