Merge branch 'main' into cogniswitch_llama
saiCogniswitch authored Oct 31, 2023
2 parents fadfa79 + 001ec40 commit 539379a
Showing 12 changed files with 2,171 additions and 38 deletions.
21 changes: 20 additions & 1 deletion CHANGELOG.md
@@ -1,3 +1,22 @@
## [v0.0.41] - 2023-10-30

### New Features
- CogniSwitch Connector (#604)

### Smaller Features + Bug Fixes
- add docs to min_chunk_size (#607)
- Change Tavily client name (#609)

## [v0.0.40] - 2023-10-26

### New Features
- Added OpenAlex Reader for Scientific QA (#599)
- Added Tavily Search API as a tool (#600)
- Adding loader to read from OneDrive Personal and OneDrive for Business (#597)

### Smaller Features + Bug Fixes
- Update TrafilaturaWebReader in library.json (#602)

## [v0.0.39] - 2023-10-24

### New Features
@@ -347,4 +366,4 @@
- None

### Miscellaneous
- None
- None
4 changes: 3 additions & 1 deletion llama_hub/docugami/docugami.ipynb
@@ -40,7 +40,9 @@
"source": [
"## Load Documents\n",
"\n",
"If the DOCUGAMI_API_KEY environment variable is set, there is no need to pass it in to the loader explicitly otherwise you can pass it in as the `access_token` parameter."
"If the DOCUGAMI_API_KEY environment variable is set, there is no need to pass it in to the loader explicitly otherwise you can pass it in as the `access_token` parameter.\n",
"\n",
"The DocugamiReader has a default minimum chunk size of 32. Chunks smaller than that are appended to subsequent chunks. Set min_chunk_size to 0 to get all structural chunks regardless of size."
]
},
{
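
For context, a minimal sketch of how the documented `min_chunk_size` behavior might be exercised. The import path, the constructor-argument form of `min_chunk_size`, and the `docset_id` value are assumptions here; treat the Docugami notebook itself as authoritative.

```python
from llama_hub.docugami import DocugamiReader  # import path is an assumption

# Assumes DOCUGAMI_API_KEY is set in the environment (see the notebook text above).
# min_chunk_size=0 is documented to return every structural chunk regardless of size;
# with the default of 32, smaller chunks are appended to subsequent chunks instead.
reader = DocugamiReader(min_chunk_size=0)  # constructor-argument form is an assumption
documents = reader.load_data(docset_id="your-docset-id")  # hypothetical docset id
```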
2 changes: 0 additions & 2 deletions llama_hub/tools/cogniswitch/README.md
@@ -76,5 +76,3 @@ print(answer_response)
{'data': {'answer': 'CogniSwitch is a technology platform that enhances the reliability of Generative AI applications for enterprises. It does this by gathering and organizing knowledge from documented sources, eliminating hallucinations and bias in AI responses. The platform uses AI to automatically gather and organize knowledge, which can then be reviewed and curated by experts before being published. The CogniSwitch API enables Gen AI applications to access this knowledge as needed, ensuring reliability. It is specifically designed to complement Generative AI and offers customized solutions for different business functions within an enterprise.'}, 'list': None, 'message': None, 'statusCode': 1000}

The tool is designed to store data and retrieve answers based on the knowledge provided. Check out the [link](https://github.com/run-llama/llama-hub/blob/main/llama_hub/tools/notebooks/cogniswitch.ipynb) for examples.


2 changes: 1 addition & 1 deletion llama_hub/tools/library.json
@@ -103,7 +103,7 @@
"author": "jerryjliu"
},
"TavilyToolSpec": {
"id": "tools/tavily",
"id": "tools/tavily_research",
"author": "rotemweiss57"
},
"TextToImageToolSpec": {
28 changes: 0 additions & 28 deletions llama_hub/tools/tavily/README.md

This file was deleted.

1 change: 0 additions & 1 deletion llama_hub/tools/tavily/requirements.txt

This file was deleted.

36 changes: 36 additions & 0 deletions llama_hub/tools/tavily_research/README.md
@@ -0,0 +1,36 @@
# Tavily Research Tool

[Tavily](https://app.tavily.com/) is a robust research API tailored specifically for LLM Agents. It seamlessly integrates with diverse data sources to ensure a superior, relevant research experience.

To begin, you need to obtain an API key from [Tavily's developer dashboard](https://app.tavily.com/).

## Why Choose Tavily Research API?

1. **Purpose-Built**: Tailored just for LLM Agents, we ensure our features and results resonate with your unique needs. We handle the full burden of searching, scraping, filtering, and extracting information from online sources, all in a single API call!
2. **Versatility**: Beyond just fetching results, Tavily Research API offers precision. With customizable search depths, domain management, and HTML content parsing controls, you're in the driver's seat.
3. **Performance**: Committed to rapidity and efficiency, our API guarantees real-time outcomes without sidelining accuracy. Please note that we're just getting started, so performance may vary and improve over time.
4. **Integration-friendly**: We appreciate the essence of adaptability. That's why integrating our API with your existing setup is a breeze. You can choose our Python library, a simple API call, or any of our supported partners such as [Langchain](https://python.langchain.com/docs/integrations/tools/tavily_search) and [LlamaIndex](https://llamahub.ai/l/tools-tavily).
5. **Transparent & Informative**: Our detailed documentation ensures you're never left in the dark. From setup basics to nuanced features, we've got you covered.

## Usage

A more extensive usage example for this tool is documented in a Jupyter notebook [here](https://github.com/emptycrown/llama-hub/tree/main/llama_hub/tools/notebooks/tavily.ipynb).

Here's an example usage of the TavilyToolSpec.

```python
from llama_hub.tools.tavily_research import TavilyToolSpec
from llama_index.agent import OpenAIAgent

tavily_tool = TavilyToolSpec(
api_key='your-key',
)
agent = OpenAIAgent.from_tools(tavily_tool.to_tool_list())

agent.chat('What happened in the latest Burning Man festival?')
```

`search`: Search for relevant dynamic data based on a query. Returns a list of URLs and their relevant content.


This loader is designed to be used as a Tool within an Agent for loading data. See [here](https://github.com/emptycrown/llama-hub/tree/main) for examples.
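
As a rough illustration of the `search` method described above, building on the `tavily_tool` instance from the earlier snippet; the `.text` attribute and the `url` metadata key on the returned Documents are assumptions:

```python
# Direct call to search, outside of an agent; max_results defaults to 6.
results = tavily_tool.search("latest Burning Man festival", max_results=3)

for doc in results:
    # Assumes each Document carries its source URL in metadata and its content in .text
    print(doc.metadata.get("url"), doc.text[:200])
```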
@@ -1,5 +1,5 @@
# init
from llama_hub.tools.tavily.base import (
from llama_hub.tools.tavily_research.base import (
TavilyToolSpec,
)

@@ -14,9 +14,9 @@ class TavilyToolSpec(BaseToolSpec):

def __init__(self, api_key: str) -> None:
"""Initialize with parameters."""
from tavily import Client
from tavily import TavilyClient

self.client = Client(api_key=api_key)
self.client = TavilyClient(api_key=api_key)

def search(self, query: str, max_results: Optional[int] = 6) -> List[Document]:
"""
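
For downstream code that imported the Tavily client by its old name, a hedged sketch of the corresponding change; the assumption is that the rename tracks the `tavily-python>=0.2.3` floor added in the requirements file below:

```python
# Old usage, as the loader did before this commit:
# from tavily import Client
# client = Client(api_key="your-key")

# New usage, matching the updated base.py above:
from tavily import TavilyClient

client = TavilyClient(api_key="your-key")  # placeholder API key
```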
1 change: 1 addition & 0 deletions llama_hub/tools/tavily_research/requirements.txt
@@ -0,0 +1 @@
tavily-python>=0.2.3