
🐉 Dragon AI

An AI chat interface powered by Google's Gemini 1.5 Flash model
Built with Flask 3, Vercel, and modern web technologies 🚀

Demo · Documentation · Quick Start · Deployment · Contributing

🎮 Quick Start

Try Dragon AI instantly:

  1. 🌐 Visit https://dragon-ai-assistant.vercel.app/
  2. 🔒 No signup required
  3. 💬 Start chatting with the AI

Or run locally:

# Clone repository
git clone https://github.com/bniladridas/dragon-ai-assistant.git
cd dragon-ai-assistant

# Install Vercel CLI
npm i -g vercel

# Install dependencies
pip install -r requirements.txt

# Run development server
vercel dev

Your Flask application will be available at http://localhost:3000.

✨ What's New in 2.0.0

Major Updates

  • 🎨 New Twitter/X-inspired dark theme UI
  • 🚀 Vercel Edge Functions integration
  • 🤖 Upgraded to Gemini 1.5 Flash
  • 📱 Enhanced mobile responsiveness
  • 🔄 Real-time chat synchronization
  • 🎨 Custom SVG animations
  • 🌐 Global CDN distribution
  • ⚡ Flask 3.0 support

Performance Improvements

  • ⚡ 50% faster response times
  • 📦 Reduced bundle size
  • 🔧 Optimized API calls
  • 🌍 Lower global latency

☁️ Dragon AI Cloud Infrastructure

flowchart TB
    subgraph Client
        B[Browser] --> V[Vercel Edge Network]
    end
    
    subgraph Cloud["Cloud Infrastructure"]
        V --> VF[Vercel Frontend]
        V --> PS[Python Serverless Function]
        PS --> G[Google Gemini API]
        
        subgraph Vercel["Vercel Platform"]
            VF --> |Static Assets| VC[Vercel CDN]
            PS --> |Environment Variables| VE[Vercel ENV]
        end
    end
    
    subgraph Storage
        B --> |Chat History| LS[Local Storage]
    end
    
    style B fill:#1D9BF0,color:#fff
    style V fill:#000000,color:#fff
    style VF fill:#0070F3,color:#fff
    style PS fill:#16181C,color:#fff
    style G fill:#4285F4,color:#fff
    style VC fill:#0070F3,color:#fff
    style VE fill:#0070F3,color:#fff
    style LS fill:#1DA1F2,color:#fff

🏗 How it Works

Dragon AI runs Flask 3 behind the Web Server Gateway Interface (WSGI), which lets Vercel serve each request through a Python Serverless Function. The architecture is designed as follows:

flowchart TB
    U[User Interface] --> |HTTP Request| F[Flask 3 Server]
    F --> |Prompt| G[Gemini 1.5 Flash API]
    G --> |Generated Response| F
    F --> |JSON Response| U
    
    subgraph Frontend
    U --> |Store| LC[Local Storage]
    LC --> |Load| U
    end
    
    subgraph Cloud["Cloud Infrastructure"]
    F --> |Vercel Edge| V[Vercel Platform]
    V --> |CDN| C[Global CDN]
    end
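
A minimal sketch of what such a request handler can look like. This is an illustration of the flow above, not the repository's actual app.py; it assumes the google-generativeai SDK and a GOOGLE_API_KEY environment variable:

# Illustrative sketch of the /generate flow, not the project's exact code
import os
from flask import Flask, request, jsonify
import google.generativeai as genai

app = Flask(__name__)
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

@app.route("/generate", methods=["POST"])
def generate():
    data = request.get_json(silent=True) or {}
    prompt = data.get("prompt", "")
    if not prompt:
        return jsonify({"error": "prompt is required"}), 400
    # Forward the prompt to Gemini and return the generated text as JSON
    result = model.generate_content(prompt)
    return jsonify({"response": result.text})

Vercel's Python runtime picks up the WSGI app object exported by app.py, so the same file works both locally and as a Serverless Function.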

🚀 Features

Core Features

  • 💬 Real-time AI chat interface
  • 📝 Markdown & code syntax highlighting
  • 💾 Local chat history
  • 📱 Responsive design
  • 🌙 Dark mode
  • ⚡ Fast response times
  • 🔒 Secure API handling

Technical Features

  • 🔄 WebSocket support
  • 📦 Efficient bundling
  • 🌐 Edge network distribution
  • 🔧 Environment management
  • 📊 Performance monitoring
  • 🔍 SEO optimization
  • 🚀 CI/CD pipeline

🚀 Deployment

Vercel Configuration

{
  "version": 2,
  "builds": [
    {
      "src": "app.py",
      "use": "@vercel/python"
    }
  ],
  "routes": [
    {
      "src": "/(.*)",
      "dest": "app.py"
    }
  ]
}

One-Click Deploy

Deploy the example using Vercel:

Deploy with Vercel

Performance Metrics

  • TTFB (Time to First Byte): ~100 ms
  • FCP (First Contentful Paint): ~300 ms
  • LCP (Largest Contentful Paint): ~800 ms
  • TTI (Time to Interactive): ~1.2 s

💻 Development

Prerequisites

  • Python 3.8+
  • Flask
  • Google API key with Gemini access
  • Vercel account

Environment Variables

GOOGLE_API_KEY=your_api_key_here
VERCEL_ENV=development
DEBUG=True
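
These values are read from the environment at runtime. A minimal sketch of how they can be consumed in Python (the exact handling in this project may differ):

import os

# Configuration names as documented above
GOOGLE_API_KEY = os.environ["GOOGLE_API_KEY"]                     # required for Gemini calls
VERCEL_ENV = os.environ.get("VERCEL_ENV", "development")          # set automatically on Vercel
DEBUG = os.environ.get("DEBUG", "False").lower() == "true"        # enable Flask debug locally only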

📚 API Documentation

Generate Response

POST /generate
Content-Type: application/json

{
  "prompt": "string",
  "chatId": "number"
}

Response Format

{
  "response": "string",
  "metadata": {
    "model": "gemini-1.5-flash",
    "timestamp": "string",
    "processTime": "number"
  }
}
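
A short client-side sketch of calling the endpoint with Python's requests library. The URL and field names are taken from the documentation above; error handling is kept minimal:

import requests

API_URL = "https://dragon-ai-assistant.vercel.app/generate"

payload = {"prompt": "Explain what a WSGI application is", "chatId": 1}
resp = requests.post(API_URL, json=payload, timeout=30)
resp.raise_for_status()

data = resp.json()
print(data["response"])                 # generated text
print(data["metadata"]["model"])        # e.g. "gemini-1.5-flash", per the response format above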

🔒 Security & Monitoring

Security Features

  • 🔐 API key encryption
  • 🛡️ Rate limiting
  • 🔍 Input validation
  • 🚫 XSS protection
  • 📝 Security logs
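
As an illustration of the input-validation idea listed above (a hedged sketch only; the project's actual checks and limits may differ), a request body can be validated before the prompt ever reaches the model:

MAX_PROMPT_LENGTH = 4000  # illustrative limit, not a documented value

def validate_prompt(data: dict) -> str:
    """Return a cleaned prompt string or raise ValueError."""
    prompt = data.get("prompt")
    if not isinstance(prompt, str) or not prompt.strip():
        raise ValueError("prompt must be a non-empty string")
    if len(prompt) > MAX_PROMPT_LENGTH:
        raise ValueError("prompt is too long")
    return prompt.strip()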

Monitoring Features

  • 📈 Real-time metrics
  • 🔍 Error tracking
  • 📊 Usage analytics
  • ⚡ Performance monitoring
  • 🌡️ Health checks

🤝 Contributing

  1. Fork the repository
  2. Create a feature branch
  3. Commit your changes
  4. Push to your branch
  5. Open a Pull Request

📜 Code of Conduct

View our Code of Conduct

📄 License

MIT License - View License

🙏 Acknowledgments

  • Google AI Team
  • Vercel Platform
  • Open Source Community

📞 Support & Status

Support Channels

Status & Updates


Made with ❤️ by Dragon Team | Powered by Vercel
© 2024 Dragon AI. All rights reserved.