feat: add 3 new LLM providers and improve the README

2026-01-30 16:47:19 +08:00
parent f610c0af8b
commit 2a57946421
13 changed files with 1483 additions and 377 deletions

README.md
View File

@@ -1,49 +1,40 @@
# QuiCommit
A powerful AI-powered Git assistant for generating conventional commits, tags, and changelogs. Manage multiple Git profiles for different work contexts seamlessly.
A powerful AI-powered Git assistant for generating conventional commits, tags, and changelogs. Manage multiple Git profiles for different work contexts.
![Rust](https://img.shields.io/badge/rust-%23000000.svg?style=for-the-badge&logo=rust&logoColor=white)
![License](https://img.shields.io/badge/license-MIT%2FApache--2.0-blue.svg)
![License](https://img.shields.io/badge/license-MIT-blue.svg)
## Features
- 🤖 **AI-Powered Generation**: Generate commit messages, tag annotations, and changelog entries using LLM APIs (OpenAI, Anthropic) or local Ollama models
- 📝 **Conventional Commits**: Full support for Conventional Commits and @commitlint formats
- 👤 **Profile Management**: Save and switch between multiple Git profiles (user info, SSH keys, GPG signing)
- 🏷️ **Smart Tagging**: Semantic version bumping with auto-generated release notes
- 📜 **Changelog Generation**: Automatic changelog generation in Keep a Changelog or GitHub Releases format
- 🔐 **Security**: Encrypt sensitive data like SSH passphrases and API keys
- 🎨 **Interactive UI**: Beautiful CLI with interactive prompts and previews
- **AI-Powered Generation**: Generate commits, tags, and changelogs using cloud LLM APIs (OpenAI, Anthropic, Kimi, DeepSeek, OpenRouter) or local Ollama models
- **Conventional Commits**: Full support for Conventional Commits and commitlint formats
- **Profile Management**: Manage multiple Git identities with SSH keys and GPG signing support
- **Smart Tagging**: Semantic version bumping with AI-generated release notes
- **Changelog Generation**: Automatic changelog generation in Keep a Changelog format
- **Security**: Encrypt sensitive data
- **Interactive UI**: Beautiful CLI with previews and confirmations
## Installation
### From Source
```bash
git clone https://github.com/yourusername/quicommit.git
cd quicommit
cargo build --release
cargo install --path .
```
The binary will be available at `target/release/quicommit`.
### Prerequisites
- Rust 1.70 or later
- Git 2.0 or later
- For AI features: Ollama (local) or API keys for OpenAI/Anthropic
Requirements: Rust 1.70+, Git 2.0+
## Quick Start
### 1. Initialize Configuration
### Initialize
```bash
quicommit init
```
This will guide you through setting up your first profile and LLM configuration.
### 2. Generate a Commit
### Generate Commit
```bash
# AI-generated commit (default)
@@ -52,27 +43,27 @@ quicommit commit
# Manual commit
quicommit commit --manual -t feat -m "add new feature"
# Date-based commit
quicommit commit --date
# Stage all and commit
quicommit commit -a
# Skip confirmation
quicommit commit --yes
```
### 3. Create a Tag
### Create Tag
```bash
# Auto-detect version bump from commits
# Auto-detect version bump
quicommit tag
# Bump specific version
# Specify bump type
quicommit tag --bump minor
# Custom tag name
quicommit tag -n v1.0.0
```
### 4. Generate Changelog
### Generate Changelog
```bash
# Generate for unreleased changes
@@ -80,19 +71,12 @@ quicommit changelog
# Generate for specific version
quicommit changelog -v 1.0.0
# Initialize new changelog
quicommit changelog --init
```
## Configuration
### Profiles
Manage multiple Git identities for different contexts:
### Manage Profiles
```bash
# Add a new profile
# Add new profile
quicommit profile add
# List profiles
@@ -101,133 +85,86 @@ quicommit profile list
# Switch profile
quicommit profile switch
# Apply profile to current repo
quicommit profile apply
# Set profile for current repo
quicommit profile set-repo personal
```
### LLM Providers
#### Ollama (Local - Recommended)
### Configure LLM
```bash
# Configure Ollama
# Configure Ollama (local)
quicommit config set-llm ollama
# Or with specific settings
quicommit config set-ollama --url http://localhost:11434 --model llama2
```
#### OpenAI
```bash
# Configure OpenAI
quicommit config set-llm openai
quicommit config set-openai-key YOUR_API_KEY
```
#### Anthropic Claude
```bash
# Configure Anthropic Claude
quicommit config set-llm anthropic
quicommit config set-anthropic-key YOUR_API_KEY
# Configure Kimi (Moonshot AI)
quicommit config set-llm kimi
quicommit config set-kimi-key YOUR_API_KEY
# Configure DeepSeek
quicommit config set-llm deepseek
quicommit config set-deepseek-key YOUR_API_KEY
# Configure OpenRouter
quicommit config set-llm openrouter
quicommit config set-openrouter-key YOUR_API_KEY
# Test LLM connection
quicommit config test-llm
```
### Commit Format
## Command Reference
```bash
# Use conventional commits (default)
quicommit config set-commit-format conventional
| Command | Alias | Description |
|---------|-------|-------------|
| `quicommit init` | `i` | Initialize configuration |
| `quicommit commit` | `c` | Generate and create commit |
| `quicommit tag` | `t` | Generate and create tag |
| `quicommit changelog` | `cl` | Generate changelog |
| `quicommit profile` | `p` | Manage Git profiles |
| `quicommit config` | `cfg` | Manage settings |
# Use commitlint format
quicommit config set-commit-format commitlint
```
### Commit Options
## Usage Examples
| Option | Description |
|--------|-------------|
| `-t, --commit-type` | Commit type (feat, fix, etc.) |
| `-s, --scope` | Commit scope |
| `-m, --message` | Commit description |
| `--body` | Commit body |
| `--breaking` | Mark as breaking change |
| `--manual` | Manual input, skip AI |
| `-a, --all` | Stage all changes |
| `-S, --sign` | GPG sign commit |
| `--amend` | Amend previous commit |
| `--dry-run` | Show without committing |
| `-y, --yes` | Skip confirmation |
### Interactive Commit Flow
### Tag Options
```bash
$ quicommit commit
Staged files (3):
• src/main.rs
• src/lib.rs
• Cargo.toml
🤖 AI is analyzing your changes...
────────────────────────────────────────────────────────────
Generated commit message:
────────────────────────────────────────────────────────────
feat: add user authentication module
Implement OAuth2 authentication with support for GitHub
and Google providers.
────────────────────────────────────────────────────────────
What would you like to do?
> ✓ Accept and commit
🔄 Regenerate
✏️ Edit
❌ Cancel
```
### Profile Management
```bash
# Create work profile
$ quicommit profile add
Profile name: work
Git user name: John Doe
Git user email: john@company.com
Is this a work profile? yes
Organization: Acme Corp
# Set for current repository
$ quicommit profile set-repo work
✓ Set 'work' for current repository
```
### Smart Tagging
```bash
$ quicommit tag
Latest version: v0.1.0
Version selection:
> Auto-detect bump from commits
Bump major version
Bump minor version
Bump patch version
🤖 AI is generating tag message from 15 commits...
Tag preview:
Name: v0.2.0
Message:
## What's Changed
### 🚀 Features
- Add user authentication
- Implement dashboard
### 🐛 Bug Fixes
- Fix login redirect issue
Create this tag? yes
✓ Created tag v0.2.0
```
| Option | Description |
|--------|-------------|
| `-n, --name` | Tag name |
| `-b, --bump` | Version bump (major/minor/patch) |
| `-m, --message` | Tag message |
| `-g, --generate` | AI-generate message |
| `-S, --sign` | GPG sign tag |
| `--lightweight` | Create lightweight tag |
| `--push` | Push to remote |
| `-y, --yes` | Skip confirmation |
## Configuration File
Configuration is stored at:
- **Linux**: `~/.config/quicommit/config.toml`
- **macOS**: `~/Library/Application Support/quicommit/config.toml`
- **Windows**: `%APPDATA%\quicommit\config.toml`
Example configuration:
Location:
- Linux/macOS: `~/.config/quicommit/config.toml`
- Windows: `%APPDATA%\quicommit\config.toml`
```toml
version = "1"
@@ -249,11 +186,16 @@ organization = "Acme Corp"
provider = "ollama"
max_tokens = 500
temperature = 0.7
timeout = 30
[llm.ollama]
url = "http://localhost:11434"
model = "llama2"
[llm.openai]
model = "gpt-4"
base_url = "https://api.openai.com/v1"
[commit]
format = "conventional"
auto_generate = true
@@ -262,7 +204,6 @@ max_subject_length = 100
[tag]
version_prefix = "v"
auto_generate = true
include_changelog = true
[changelog]
path = "CHANGELOG.md"
@@ -274,62 +215,81 @@ group_by_type = true
| Variable | Description |
|----------|-------------|
| `QUICOMMIT_CONFIG` | Path to configuration file |
| `EDITOR` | Default editor for interactive input |
| `QUICOMMIT_CONFIG` | Configuration file path |
| `EDITOR` | Default editor |
| `NO_COLOR` | Disable colored output |
## Shell Completions
### Bash
```bash
quicommit completions bash > /etc/bash_completion.d/quicommit
```
### Zsh
```bash
quicommit completions zsh > /usr/local/share/zsh/site-functions/_quicommit
```
### Fish
```bash
quicommit completions fish > ~/.config/fish/completions/quicommit.fish
```
## Troubleshooting
### LLM Connection Issues
```bash
# View current configuration
quicommit config list
# Test LLM connection
quicommit config test-llm
# List available models
quicommit config list-models
```
### Git Operations
```bash
# Check current profile
quicommit profile show
# Apply profile to fix git config
quicommit profile apply
# Edit configuration
quicommit config edit
```
## Contributing
Contributions are welcome! Please feel free to submit a Pull Request.
Contributions are welcome! Please follow these steps:
### Submit a Pull Request
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'feat: add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
2. Create a feature branch: `git checkout -b feature/your-feature`
3. Commit changes: `git commit -m 'feat: add new feature'`
4. Push branch: `git push origin feature/your-feature`
5. Open a Pull Request
### Development Setup
```bash
# Clone repository
git clone https://github.com/YOUR_USERNAME/quicommit.git
cd quicommit
# Fetch dependencies
cargo fetch
# Run in development mode
cargo run -- commit --help
# Run tests
cargo test
# Code quality checks
cargo clippy
cargo fmt --check
```
### Code Standards
- Follow Rust formatting (run `cargo fmt`)
- Use Conventional Commits for commit messages
- Add tests for new features
- Ensure `cargo clippy` passes with no warnings
### Project Structure
```
src/
├── commands/ # CLI command implementations
├── config/ # Configuration management
├── generator/ # AI content generation
├── git/ # Git operations
├── llm/ # LLM provider implementations
└── utils/ # Utility functions
```
## License
This project is licensed under the MIT OR Apache-2.0 license.
MIT License
## Acknowledgments

View File

@@ -1,4 +1,4 @@
# QuicCommit Configuration Example
# QuiCommit Configuration Example
# Copy this file to your config directory and modify as needed:
# - Linux: ~/.config/quicommit/config.toml
# - macOS: ~/Library/Application Support/quicommit/config.toml

View File

@@ -1,49 +1,42 @@
# QuiCommit
A powerful AI-powered Git assistant for generating conventional commit messages, tags, and changelogs, with support for managing multiple Git profiles for different work contexts.
![Rust](https://img.shields.io/badge/rust-%23000000.svg?style=for-the-badge&logo=rust&logoColor=white)
A powerful AI-powered Git assistant for generating conventional commit messages, tags, and changelogs, with support for managing multiple Git profiles.
![Rust](https://img.shields.io/badge/rust-%23000000.svg?logo=rust&logoColor=white)
![LICENSE](https://img.shields.io/badge/license-MIT-blue.svg)
## Features
- 🤖 **AI-Powered Generation**: Generate commit messages, tag annotations, and changelog entries using LLM APIs (OpenAI, Anthropic) or local Ollama models
- 📝 **Conventional Commits**: Full support for the Conventional Commits and @commitlint formats
- 👤 **Profile Management**: Save and switch between multiple Git profiles for different contexts (user info, SSH keys, GPG signing)
- 🏷️ **Smart Tagging**: Semantic version bumping with auto-generated release notes
- 📜 **Changelog Generation**: Automatic changelog generation in Keep a Changelog or GitHub Releases format
- 🔐 **Security**: Encrypt sensitive data such as SSH passphrases and API keys
- 🎨 **Interactive UI**: Beautiful CLI with interactive prompts and previews
- **AI-Powered Generation**: Generate commit messages, tags, and changelogs using LLM APIs (Ollama (local), OpenAI, Anthropic Claude, Kimi, DeepSeek, OpenRouter)
- **Conventional Commits**: Support for the Conventional Commits and commitlint formats
- **Profile Management**: Manage multiple Git identities for different contexts, with SSH key and GPG signing support
- **Smart Tagging**: Automatic semantic-version bump detection with AI-generated tag messages
- **Changelog Generation**: Automatic changelog generation in Keep a Changelog format
- **Security**: Encrypted storage of sensitive data
- **Interactive UI**: Beautiful CLI with previews and confirmations
## 📦 Installation
### From Source
## Installation
```bash
git clone https://github.com/yourusername/quicommit.git
cd quicommit
cargo build --release
cargo install --path .
```
The binary will be available at `target/release/quicommit`.
Requirements: Rust 1.70+, Git 2.0+
### Prerequisites
## Quick Start
- Rust 1.70 or later
- Git 2.0 or later
- For AI features (optional): Ollama (local) or API keys for OpenAI/Anthropic
## 🚀 Quick Start
### 1. Initialize Configuration
### Initialize Configuration
```bash
quicommit init
```
This will guide you through setting up your first profile and LLM configuration.
### 2. Generate a Commit
### Generate a Commit
```bash
# AI-generated commit (default)
@@ -52,44 +45,37 @@ quicommit commit
# Manual commit
quicommit commit --manual -t feat -m "add new feature"
# Date-based commit
quicommit commit --date
# Stage all files and commit
quicommit commit -a
# Skip confirmation
quicommit commit --yes
```
### 3. Create a Tag
### Create a Tag
```bash
# Auto-detect version bump from commits
# Auto-detect version bump
quicommit tag
# Bump a specific version
# Specify the bump type
quicommit tag --bump minor
# Custom tag name
quicommit tag -n v1.0.0
```
### 4. Generate a Changelog
### Generate a Changelog
```bash
# Generate for unreleased changes
# Generate a changelog for unreleased changes
quicommit changelog
# Generate for a specific version
quicommit changelog -v 1.0.0
# Initialize a new changelog
quicommit changelog --init
```
## ⚙️ Configuration
### Profiles
Manage multiple Git identities for different contexts:
### Manage Profiles
```bash
# Add a new profile
@@ -101,132 +87,86 @@ quicommit profile list
# Switch profile
quicommit profile switch
# Apply profile to the current repo
quicommit profile apply
# Set profile for the current repo
# Set the profile for the current repo
quicommit profile set-repo personal
```
### LLM Providers
#### Ollama (Local - Recommended)
### Configure LLM
```bash
# Configure Ollama
# Configure Ollama (local)
quicommit config set-llm ollama
# Or with specific settings
quicommit config set-ollama --url http://localhost:11434 --model llama2
```
#### OpenAI
```bash
# Configure OpenAI
quicommit config set-llm openai
quicommit config set-openai-key YOUR_API_KEY
```
#### Anthropic Claude
```bash
# Configure Anthropic Claude
quicommit config set-llm anthropic
quicommit config set-anthropic-key YOUR_API_KEY
# Configure Kimi
quicommit config set-llm kimi
quicommit config set-kimi-key YOUR_API_KEY
# Configure DeepSeek
quicommit config set-llm deepseek
quicommit config set-deepseek-key YOUR_API_KEY
# Configure OpenRouter
quicommit config set-llm openrouter
quicommit config set-openrouter-key YOUR_API_KEY
# Test LLM connection
quicommit config test-llm
```
### Commit Format
## Command Reference
```bash
# Use conventional commits (default)
quicommit config set-commit-format conventional
| Command | Alias | Description |
|------|------|------|
| `quicommit init` | `i` | Initialize configuration |
| `quicommit commit` | `c` | Generate and create a commit |
| `quicommit tag` | `t` | Generate and create a tag |
| `quicommit changelog` | `cl` | Generate a changelog |
| `quicommit profile` | `p` | Manage Git profiles |
| `quicommit config` | `cfg` | Manage settings |
# Use commitlint format
quicommit config set-commit-format commitlint
```
### Commit Options
## 📖 Usage Examples
| Option | Description |
|------|------|
| `-t, --commit-type` | Commit type (feat, fix, etc.) |
| `-s, --scope` | Commit scope |
| `-m, --message` | Commit description |
| `--body` | Commit body |
| `--breaking` | Mark as a breaking change |
| `--manual` | Manual input, skip AI generation |
| `-a, --all` | Stage all changes |
| `-S, --sign` | GPG sign the commit |
| `--amend` | Amend the previous commit |
| `--dry-run` | Dry run, no actual commit |
| `-y, --yes` | Skip confirmation prompt |
### Interactive Commit Flow
### Tag Options
```bash
$ quicommit commit
Staged files (3):
• src/main.rs
• src/lib.rs
• Cargo.toml
| Option | Description |
|------|------|
| `-n, --name` | Tag name |
| `-b, --bump` | Version bump type (major/minor/patch) |
| `-m, --message` | Tag message |
| `-g, --generate` | AI-generate the tag message |
| `-S, --sign` | GPG sign the tag |
| `--lightweight` | Create a lightweight tag |
| `--push` | Push to remote |
| `-y, --yes` | Skip confirmation prompt |
🤖 AI is analyzing your changes...
## Configuration File
────────────────────────────────────────────────────────────
Generated commit message:
────────────────────────────────────────────────────────────
feat: add user authentication module
Implement OAuth2 authentication with support for GitHub and Google providers.
────────────────────────────────────────────────────────────
What would you like to do?
> ✓ Accept and commit
🔄 Regenerate
✏️ Edit
❌ Cancel
```
### Profile Management
```bash
# Create a work profile
$ quicommit profile add
Profile name: work
Git user name: John Doe
Git user email: john@company.com
Is this a work profile? yes
Organization: Acme Corp
# Set for the current repository
$ quicommit profile set-repo work
✓ Set 'work' for the current repository
```
### Smart Tagging
```bash
$ quicommit tag
Latest version: v0.1.0
Version selection:
> Auto-detect bump from commits
Bump major version
Bump minor version
Bump patch version
🤖 AI is generating a tag message from 15 commits...
Tag preview:
Name: v0.2.0
Message:
## What's Changed
### 🚀 Features
- Add user authentication
- Implement dashboard
### 🐛 Bug Fixes
- Fix login redirect issue
Create this tag? yes
✓ Created tag v0.2.0
```
## 📁 Configuration File
Configuration is stored at:
- **Linux**: `~/.config/quicommit/config.toml`
- **macOS**: `~/Library/Application Support/quicommit/config.toml`
- **Windows**: `%APPDATA%\quicommit\config.toml`
Example configuration:
Configuration file location:
- Linux/macOS: `~/.config/quicommit/config.toml`
- Windows: `%APPDATA%\quicommit\config.toml`
```toml
version = "1"
@@ -248,11 +188,16 @@ organization = "Acme Corp"
provider = "ollama"
max_tokens = 500
temperature = 0.7
timeout = 30
[llm.ollama]
url = "http://localhost:11434"
model = "llama2"
[llm.openai]
model = "gpt-4"
base_url = "https://api.openai.com/v1"
[commit]
format = "conventional"
auto_generate = true
@@ -261,7 +206,6 @@ max_subject_length = 100
[tag]
version_prefix = "v"
auto_generate = true
include_changelog = true
[changelog]
path = "CHANGELOG.md"
@@ -269,69 +213,82 @@ auto_generate = true
group_by_type = true
```
## 🔧 Environment Variables
## Environment Variables
| Variable | Description |
|--------|------|
| `QUICOMMIT_CONFIG` | Configuration file path |
| `EDITOR` | Default editor for interactive input |
| `EDITOR` | Default editor |
| `NO_COLOR` | Disable colored output |
## 💻 Shell Completions
### Bash
```bash
quicommit completions bash > /etc/bash_completion.d/quicommit
```
### Zsh
```bash
quicommit completions zsh > /usr/local/share/zsh/site-functions/_quicommit
```
### Fish
```bash
quicommit completions fish > ~/.config/fish/completions/quicommit.fish
```
## 🔍 Troubleshooting
### LLM Connection Issues
## Troubleshooting
```bash
# View current configuration
quicommit config list
# Test LLM connection
quicommit config test-llm
# List available models
quicommit config list-models
# Edit configuration file
quicommit config edit
```
### Git Operations
## Contributing
```bash
# Check current profile
quicommit profile show
Contributions are welcome! Please follow these steps:
# Apply profile to fix git config
quicommit profile apply
```
## 🤝 Contributing
Contributions are welcome! Feel free to submit a Pull Request.
### Submit a Pull Request
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'feat: add amazing feature'`)
4. Push the branch (`git push origin feature/amazing-feature`)
2. Create a feature branch: `git checkout -b feature/your-feature`
3. Commit changes: `git commit -m 'feat: add feature'`
4. Push the branch: `git push origin feature/your-feature`
5. Open a Pull Request
## 📄 License
### Development Setup
This project is licensed under MIT OR Apache-2.0.
```bash
# Clone the repository
git clone https://github.com/YOUR_USERNAME/quicommit.git
cd quicommit
## 🙏 Acknowledgments
# Fetch dependencies
cargo fetch
- [Conventional Commits](https://www.conventionalcommits.org/) specification
- [Keep a Changelog](https://keepachangelog.com/) format
- [Ollama](https://ollama.ai/) for local LLM support
# Run in development mode
cargo run -- commit --help
# Run tests
cargo test
# Code quality checks
cargo clippy
cargo fmt --check
```
### Code Standards
- Follow the official Rust style (run `cargo fmt`)
- Use Conventional Commits for commit messages
- Add tests for new features
- Ensure `cargo clippy` passes with no warnings
### Project Structure
```
src/
├── commands/ # CLI command implementations
├── config/ # Configuration management
├── generator/ # AI content generation
├── git/ # Git operation wrappers
├── llm/ # LLM provider implementations
└── utils/ # Utility functions
```
## License
MIT License

View File

@@ -6,6 +6,20 @@ use dialoguer::{Confirm, Input, Select};
use crate::config::manager::ConfigManager;
use crate::config::{CommitFormat, LlmConfig};
/// Mask API key with asterisks for security
fn mask_api_key(key: Option<&str>) -> String {
match key {
Some(k) => {
if k.len() <= 8 {
"*".repeat(k.len())
} else {
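// Byte-index slicing assumes an ASCII key (the normal case for API keys); a multibyte UTF-8 key would panic here.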
format!("{}***{}", &k[..4], &k[k.len()-4..])
}
}
None => "✗ not set".red().to_string(),
}
}
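// Illustration (hypothetical test, not part of this commit): expected
// mask_api_key behavior on typical keys.
#[cfg(test)]
mod mask_api_key_tests {
use super::*;
#[test]
fn masks_long_and_short_keys() {
// Long keys keep the first and last four characters.
assert_eq!(mask_api_key(Some("sk-1234567890abcd")), "sk-1***abcd");
// Keys of 8 characters or fewer are fully starred.
assert_eq!(mask_api_key(Some("abcd1234")), "********");
}
}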
/// Manage configuration settings
#[derive(Parser)]
pub struct ConfigCommand {
@@ -18,6 +32,9 @@ enum ConfigSubcommand {
/// Show current configuration
Show,
/// List all configuration information (with masked API keys)
List,
/// Edit configuration file
Edit,
@@ -54,6 +71,24 @@ enum ConfigSubcommand {
key: String,
},
/// Set Kimi API key
SetKimiKey {
/// API key
key: String,
},
/// Set DeepSeek API key
SetDeepSeekKey {
/// API key
key: String,
},
/// Set OpenRouter API key
SetOpenRouterKey {
/// API key
key: String,
},
/// Configure Ollama settings
SetOllama {
/// Ollama server URL
@@ -64,6 +99,36 @@ enum ConfigSubcommand {
model: Option<String>,
},
/// Configure Kimi settings
SetKimi {
/// API base URL (for custom endpoints)
#[arg(short, long)]
base_url: Option<String>,
/// Model name
#[arg(short, long)]
model: Option<String>,
},
/// Configure DeepSeek settings
SetDeepSeek {
/// API base URL (for custom endpoints)
#[arg(short, long)]
base_url: Option<String>,
/// Model name
#[arg(short, long)]
model: Option<String>,
},
/// Configure OpenRouter settings
SetOpenRouter {
/// API base URL (for custom endpoints)
#[arg(short, long)]
base_url: Option<String>,
/// Model name
#[arg(short, long)]
model: Option<String>,
},
/// Set commit format
SetCommitFormat {
/// Format (conventional, commitlint)
@@ -114,13 +179,20 @@ impl ConfigCommand {
pub async fn execute(&self) -> Result<()> {
match &self.command {
Some(ConfigSubcommand::Show) => self.show_config().await,
Some(ConfigSubcommand::List) => self.list_config().await,
Some(ConfigSubcommand::Edit) => self.edit_config().await,
Some(ConfigSubcommand::Set { key, value }) => self.set_value(key, value).await,
Some(ConfigSubcommand::Get { key }) => self.get_value(key).await,
Some(ConfigSubcommand::SetLlm { provider }) => self.set_llm(provider.as_deref()).await,
Some(ConfigSubcommand::SetOpenAiKey { key }) => self.set_openai_key(key).await,
Some(ConfigSubcommand::SetAnthropicKey { key }) => self.set_anthropic_key(key).await,
Some(ConfigSubcommand::SetKimiKey { key }) => self.set_kimi_key(key).await,
Some(ConfigSubcommand::SetDeepSeekKey { key }) => self.set_deepseek_key(key).await,
Some(ConfigSubcommand::SetOpenRouterKey { key }) => self.set_openrouter_key(key).await,
Some(ConfigSubcommand::SetOllama { url, model }) => self.set_ollama(url.as_deref(), model.as_deref()).await,
Some(ConfigSubcommand::SetKimi { base_url, model }) => self.set_kimi(base_url.as_deref(), model.as_deref()).await,
Some(ConfigSubcommand::SetDeepSeek { base_url, model }) => self.set_deepseek(base_url.as_deref(), model.as_deref()).await,
Some(ConfigSubcommand::SetOpenRouter { base_url, model }) => self.set_openrouter(base_url.as_deref(), model.as_deref()).await,
Some(ConfigSubcommand::SetCommitFormat { format }) => self.set_commit_format(format).await,
Some(ConfigSubcommand::SetVersionPrefix { prefix }) => self.set_version_prefix(prefix).await,
Some(ConfigSubcommand::SetChangelogPath { path }) => self.set_changelog_path(path).await,
@@ -137,7 +209,7 @@ impl ConfigCommand {
let manager = ConfigManager::new()?;
let config = manager.config();
println!("{}", "\nQuicCommit Configuration".bold());
println!("{}", "\nQuiCommit Configuration".bold());
println!("{}", "".repeat(60));
println!("\n{}", "General:".bold());
@@ -159,13 +231,27 @@ impl ConfigCommand {
}
"openai" => {
println!(" Model: {}", config.llm.openai.model.cyan());
println!(" API key: {}",
if config.llm.openai.api_key.is_some() { "✓ set".green() } else { "✗ not set".red() });
println!(" Base URL: {}", config.llm.openai.base_url);
println!(" API key: {}", mask_api_key(config.llm.openai.api_key.as_deref()));
}
"anthropic" => {
println!(" Model: {}", config.llm.anthropic.model.cyan());
println!(" API key: {}",
if config.llm.anthropic.api_key.is_some() { "✓ set".green() } else { "✗ not set".red() });
println!(" API key: {}", mask_api_key(config.llm.anthropic.api_key.as_deref()));
}
"kimi" => {
println!(" Model: {}", config.llm.kimi.model.cyan());
println!(" Base URL: {}", config.llm.kimi.base_url);
println!(" API key: {}", mask_api_key(config.llm.kimi.api_key.as_deref()));
}
"deepseek" => {
println!(" Model: {}", config.llm.deepseek.model.cyan());
println!(" Base URL: {}", config.llm.deepseek.base_url);
println!(" API key: {}", mask_api_key(config.llm.deepseek.api_key.as_deref()));
}
"openrouter" => {
println!(" Model: {}", config.llm.openrouter.model.cyan());
println!(" Base URL: {}", config.llm.openrouter.base_url);
println!(" API key: {}", mask_api_key(config.llm.openrouter.api_key.as_deref()));
}
_ => {}
}
@@ -192,6 +278,105 @@ impl ConfigCommand {
Ok(())
}
/// List all configuration information with masked API keys
async fn list_config(&self) -> Result<()> {
let manager = ConfigManager::new()?;
let config = manager.config();
println!("{}", "\nQuiCommit Configuration".bold());
println!("{}", "".repeat(80));
println!("\n{}", "📁 General Configuration:".bold().blue());
println!(" Config file: {}", manager.path().display());
println!(" Default profile: {}",
config.default_profile.as_deref().unwrap_or("(none)").cyan());
println!(" Profiles: {} profile(s)", config.profiles.len());
println!(" Repository mappings: {} mapping(s)", config.repo_profiles.len());
println!("\n{}", "🤖 LLM Configuration:".bold().blue());
println!(" Provider: {}", config.llm.provider.cyan());
println!(" Max tokens: {}", config.llm.max_tokens);
println!(" Temperature: {}", config.llm.temperature);
println!(" Timeout: {}s", config.llm.timeout);
println!("\n{}", " LLM Provider Details:".dimmed());
// OpenAI
println!(" 🔹 OpenAI:");
println!(" Model: {}", config.llm.openai.model.cyan());
println!(" Base URL: {}", config.llm.openai.base_url);
println!(" API Key: {}", mask_api_key(config.llm.openai.api_key.as_deref()));
// Anthropic
println!(" 🔹 Anthropic:");
println!(" Model: {}", config.llm.anthropic.model.cyan());
println!(" API Key: {}", mask_api_key(config.llm.anthropic.api_key.as_deref()));
// Kimi
println!(" 🔹 Kimi (Moonshot AI):");
println!(" Model: {}", config.llm.kimi.model.cyan());
println!(" Base URL: {}", config.llm.kimi.base_url);
println!(" API Key: {}", mask_api_key(config.llm.kimi.api_key.as_deref()));
// DeepSeek
println!(" 🔹 DeepSeek:");
println!(" Model: {}", config.llm.deepseek.model.cyan());
println!(" Base URL: {}", config.llm.deepseek.base_url);
println!(" API Key: {}", mask_api_key(config.llm.deepseek.api_key.as_deref()));
// OpenRouter
println!(" 🔹 OpenRouter:");
println!(" Model: {}", config.llm.openrouter.model.cyan());
println!(" Base URL: {}", config.llm.openrouter.base_url);
println!(" API Key: {}", mask_api_key(config.llm.openrouter.api_key.as_deref()));
// Ollama
println!(" 🔹 Ollama:");
println!(" URL: {}", config.llm.ollama.url);
println!(" Model: {}", config.llm.ollama.model.cyan());
println!("\n{}", "📝 Commit Configuration:".bold().blue());
println!(" Format: {}", config.commit.format.to_string().cyan());
println!(" Auto-generate: {}", if config.commit.auto_generate { "✓ yes".green() } else { "✗ no".red() });
println!(" Allow empty: {}", if config.commit.allow_empty { "✓ yes".green() } else { "✗ no".red() });
println!(" GPG sign: {}", if config.commit.gpg_sign { "✓ yes".green() } else { "✗ no".red() });
println!(" Default scope: {}", config.commit.default_scope.as_deref().unwrap_or("(none)").cyan());
println!(" Max subject length: {}", config.commit.max_subject_length);
println!(" Require scope: {}", if config.commit.require_scope { "✓ yes".green() } else { "✗ no".red() });
println!(" Require body: {}", if config.commit.require_body { "✓ yes".green() } else { "✗ no".red() });
if !config.commit.body_required_types.is_empty() {
println!(" Body required types: {}", config.commit.body_required_types.join(", ").cyan());
}
println!("\n{}", "🏷️ Tag Configuration:".bold().blue());
println!(" Version prefix: '{}'", config.tag.version_prefix.cyan());
println!(" Auto-generate: {}", if config.tag.auto_generate { "✓ yes".green() } else { "✗ no".red() });
println!(" GPG sign: {}", if config.tag.gpg_sign { "✓ yes".green() } else { "✗ no".red() });
println!(" Include changelog: {}", if config.tag.include_changelog { "✓ yes".green() } else { "✗ no".red() });
println!(" Annotation template: {}", config.tag.annotation_template.as_deref().unwrap_or("(none)").cyan());
println!("\n{}", "📋 Changelog Configuration:".bold().blue());
println!(" Path: {}", config.changelog.path);
println!(" Auto-generate: {}", if config.changelog.auto_generate { "✓ yes".green() } else { "✗ no".red() });
println!(" Format: {}", format!("{:?}", config.changelog.format).cyan());
println!(" Include hashes: {}", if config.changelog.include_hashes { "✓ yes".green() } else { "✗ no".red() });
println!(" Include authors: {}", if config.changelog.include_authors { "✓ yes".green() } else { "✗ no".red() });
println!(" Group by type: {}", if config.changelog.group_by_type { "✓ yes".green() } else { "✗ no".red() });
if !config.changelog.custom_categories.is_empty() {
println!(" Custom categories: {} category(ies)", config.changelog.custom_categories.len());
}
println!("\n{}", "🎨 Theme Configuration:".bold().blue());
println!(" Colors: {}", if config.theme.colors { "✓ enabled".green() } else { "✗ disabled".red() });
println!(" Icons: {}", if config.theme.icons { "✓ enabled".green() } else { "✗ disabled".red() });
println!(" Date format: {}", config.theme.date_format.cyan());
println!("\n{}", "🔒 Security:".bold().blue());
println!(" Encrypt sensitive: {}", if config.encrypt_sensitive { "✓ yes".green() } else { "✗ no".red() });
Ok(())
}
async fn edit_config(&self) -> Result<()> {
let manager = ConfigManager::new()?;
crate::utils::editor::edit_file(manager.path())?;
@@ -263,7 +448,7 @@ impl ConfigCommand {
let provider = if let Some(p) = provider {
p.to_string()
} else {
let providers = vec!["ollama", "openai", "anthropic"];
let providers = vec!["ollama", "openai", "anthropic", "kimi", "deepseek", "openrouter"];
let idx = Select::new()
.with_prompt("Select LLM provider")
.items(&providers)
@@ -287,12 +472,86 @@ impl ConfigCommand {
.default("gpt-4".to_string())
.interact_text()?;
manager.config_mut().llm.openai.model = model;
let base_url: String = Input::new()
.with_prompt("Base URL (optional)")
.default("https://api.openai.com/v1".to_string())
.interact_text()?;
if base_url != "https://api.openai.com/v1" {
manager.config_mut().llm.openai.base_url = base_url;
}
}
"anthropic" => {
let api_key: String = Input::new()
.with_prompt("Anthropic API key")
.interact_text()?;
manager.set_anthropic_api_key(api_key);
let model: String = Input::new()
.with_prompt("Model")
.default("claude-3-sonnet-20240229".to_string())
.interact_text()?;
manager.config_mut().llm.anthropic.model = model;
}
"kimi" => {
let api_key: String = Input::new()
.with_prompt("Kimi API key")
.interact_text()?;
manager.set_kimi_api_key(api_key);
let model: String = Input::new()
.with_prompt("Model")
.default("moonshot-v1-8k".to_string())
.interact_text()?;
manager.config_mut().llm.kimi.model = model;
let base_url: String = Input::new()
.with_prompt("Base URL (optional)")
.default("https://api.moonshot.cn/v1".to_string())
.interact_text()?;
if base_url != "https://api.moonshot.cn/v1" {
manager.set_kimi_base_url(base_url);
}
}
"deepseek" => {
let api_key: String = Input::new()
.with_prompt("DeepSeek API key")
.interact_text()?;
manager.set_deepseek_api_key(api_key);
let model: String = Input::new()
.with_prompt("Model")
.default("deepseek-chat".to_string())
.interact_text()?;
manager.config_mut().llm.deepseek.model = model;
let base_url: String = Input::new()
.with_prompt("Base URL (optional)")
.default("https://api.deepseek.com/v1".to_string())
.interact_text()?;
if base_url != "https://api.deepseek.com/v1" {
manager.set_deepseek_base_url(base_url);
}
}
"openrouter" => {
let api_key: String = Input::new()
.with_prompt("OpenRouter API key")
.interact_text()?;
manager.set_openrouter_api_key(api_key);
let model: String = Input::new()
.with_prompt("Model")
.default("openai/gpt-3.5-turbo".to_string())
.interact_text()?;
manager.config_mut().llm.openrouter.model = model;
let base_url: String = Input::new()
.with_prompt("Base URL (optional)")
.default("https://openrouter.ai/api/v1".to_string())
.interact_text()?;
if base_url != "https://openrouter.ai/api/v1" {
manager.set_openrouter_base_url(base_url);
}
}
"ollama" => {
let url: String = Input::new()
@@ -332,6 +591,75 @@ impl ConfigCommand {
Ok(())
}
async fn set_kimi_key(&self, key: &str) -> Result<()> {
let mut manager = ConfigManager::new()?;
manager.set_kimi_api_key(key.to_string());
manager.save()?;
println!("{} Kimi API key set", "".green());
Ok(())
}
async fn set_deepseek_key(&self, key: &str) -> Result<()> {
let mut manager = ConfigManager::new()?;
manager.set_deepseek_api_key(key.to_string());
manager.save()?;
println!("{} DeepSeek API key set", "".green());
Ok(())
}
async fn set_openrouter_key(&self, key: &str) -> Result<()> {
let mut manager = ConfigManager::new()?;
manager.set_openrouter_api_key(key.to_string());
manager.save()?;
println!("{} OpenRouter API key set", "".green());
Ok(())
}
async fn set_kimi(&self, base_url: Option<&str>, model: Option<&str>) -> Result<()> {
let mut manager = ConfigManager::new()?;
if let Some(url) = base_url {
manager.set_kimi_base_url(url.to_string());
}
if let Some(m) = model {
manager.config_mut().llm.kimi.model = m.to_string();
}
manager.save()?;
println!("{} Kimi configuration updated", "".green());
Ok(())
}
async fn set_deepseek(&self, base_url: Option<&str>, model: Option<&str>) -> Result<()> {
let mut manager = ConfigManager::new()?;
if let Some(url) = base_url {
manager.set_deepseek_base_url(url.to_string());
}
if let Some(m) = model {
manager.config_mut().llm.deepseek.model = m.to_string();
}
manager.save()?;
println!("{} DeepSeek configuration updated", "".green());
Ok(())
}
async fn set_openrouter(&self, base_url: Option<&str>, model: Option<&str>) -> Result<()> {
let mut manager = ConfigManager::new()?;
if let Some(url) = base_url {
manager.set_openrouter_base_url(url.to_string());
}
if let Some(m) = model {
manager.config_mut().llm.openrouter.model = m.to_string();
}
manager.save()?;
println!("{} OpenRouter configuration updated", "".green());
Ok(())
}
async fn set_ollama(&self, url: Option<&str>, model: Option<&str>) -> Result<()> {
let mut manager = ConfigManager::new()?;

View File

@@ -22,7 +22,7 @@ pub struct InitCommand {
impl InitCommand {
pub async fn execute(&self) -> Result<()> {
println!("{}", "🚀 Initializing QuicCommit...".bold().cyan());
println!("{}", "🚀 Initializing QuiCommit...".bold().cyan());
let config_path = crate::config::AppConfig::default_path()?;
@@ -57,7 +57,7 @@ impl InitCommand {
manager.save()?;
println!("{}", "✅ QuicCommit initialized successfully!".bold().green());
println!("{}", "✅ QuiCommit initialized successfully!".bold().green());
println!("\nConfig file: {}", config_path.display());
println!("\nNext steps:");
println!(" 1. Create a profile: {}", "quicommit profile add".cyan());

View File

@@ -220,6 +220,72 @@ impl ConfigManager {
self.modified = true;
}
/// Get Kimi API key
pub fn kimi_api_key(&self) -> Option<&String> {
self.config.llm.kimi.api_key.as_ref()
}
/// Set Kimi API key
pub fn set_kimi_api_key(&mut self, key: String) {
self.config.llm.kimi.api_key = Some(key);
self.modified = true;
}
/// Get Kimi base URL
pub fn kimi_base_url(&self) -> &str {
&self.config.llm.kimi.base_url
}
/// Set Kimi base URL
pub fn set_kimi_base_url(&mut self, url: String) {
self.config.llm.kimi.base_url = url;
self.modified = true;
}
/// Get DeepSeek API key
pub fn deepseek_api_key(&self) -> Option<&String> {
self.config.llm.deepseek.api_key.as_ref()
}
/// Set DeepSeek API key
pub fn set_deepseek_api_key(&mut self, key: String) {
self.config.llm.deepseek.api_key = Some(key);
self.modified = true;
}
/// Get DeepSeek base URL
pub fn deepseek_base_url(&self) -> &str {
&self.config.llm.deepseek.base_url
}
/// Set DeepSeek base URL
pub fn set_deepseek_base_url(&mut self, url: String) {
self.config.llm.deepseek.base_url = url;
self.modified = true;
}
/// Get OpenRouter API key
pub fn openrouter_api_key(&self) -> Option<&String> {
self.config.llm.openrouter.api_key.as_ref()
}
/// Set OpenRouter API key
pub fn set_openrouter_api_key(&mut self, key: String) {
self.config.llm.openrouter.api_key = Some(key);
self.modified = true;
}
/// Get OpenRouter base URL
pub fn openrouter_base_url(&self) -> &str {
&self.config.llm.openrouter.base_url
}
/// Set OpenRouter base URL
pub fn set_openrouter_base_url(&mut self, url: String) {
self.config.llm.openrouter.base_url = url;
self.modified = true;
}
// Commit configuration
/// Get commit format

View File

@@ -89,6 +89,18 @@ pub struct LlmConfig {
#[serde(default)]
pub anthropic: AnthropicConfig,
/// Kimi (Moonshot AI) configuration
#[serde(default)]
pub kimi: KimiConfig,
/// DeepSeek configuration
#[serde(default)]
pub deepseek: DeepSeekConfig,
/// OpenRouter configuration
#[serde(default)]
pub openrouter: OpenRouterConfig,
/// Custom API configuration
#[serde(default)]
pub custom: Option<CustomLlmConfig>,
@@ -113,6 +125,9 @@ impl Default for LlmConfig {
openai: OpenAiConfig::default(),
ollama: OllamaConfig::default(),
anthropic: AnthropicConfig::default(),
kimi: KimiConfig::default(),
deepseek: DeepSeekConfig::default(),
openrouter: OpenRouterConfig::default(),
custom: None,
max_tokens: default_max_tokens(),
temperature: default_temperature(),
@@ -187,6 +202,81 @@ impl Default for AnthropicConfig {
}
}
/// Kimi (Moonshot AI) configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct KimiConfig {
/// API key
pub api_key: Option<String>,
/// Model to use
#[serde(default = "default_kimi_model")]
pub model: String,
/// API base URL (for custom endpoints)
#[serde(default = "default_kimi_base_url")]
pub base_url: String,
}
impl Default for KimiConfig {
fn default() -> Self {
Self {
api_key: None,
model: default_kimi_model(),
base_url: default_kimi_base_url(),
}
}
}
/// DeepSeek configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct DeepSeekConfig {
/// API key
pub api_key: Option<String>,
/// Model to use
#[serde(default = "default_deepseek_model")]
pub model: String,
/// API base URL (for custom endpoints)
#[serde(default = "default_deepseek_base_url")]
pub base_url: String,
}
impl Default for DeepSeekConfig {
fn default() -> Self {
Self {
api_key: None,
model: default_deepseek_model(),
base_url: default_deepseek_base_url(),
}
}
}
/// OpenRouter configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct OpenRouterConfig {
/// API key
pub api_key: Option<String>,
/// Model to use
#[serde(default = "default_openrouter_model")]
pub model: String,
/// API base URL (for custom endpoints)
#[serde(default = "default_openrouter_base_url")]
pub base_url: String,
}
impl Default for OpenRouterConfig {
fn default() -> Self {
Self {
api_key: None,
model: default_openrouter_model(),
base_url: default_openrouter_base_url(),
}
}
}
/// Custom LLM API configuration
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct CustomLlmConfig {
@@ -423,7 +513,7 @@ fn default_max_tokens() -> u32 {
}
fn default_temperature() -> f32 {
0.7
0.6
}
fn default_timeout() -> u64 {
@@ -450,6 +540,30 @@ fn default_anthropic_model() -> String {
"claude-3-sonnet-20240229".to_string()
}
fn default_kimi_model() -> String {
"moonshot-v1-8k".to_string()
}
fn default_kimi_base_url() -> String {
"https://api.moonshot.cn/v1".to_string()
}
fn default_deepseek_model() -> String {
"deepseek-chat".to_string()
}
fn default_deepseek_base_url() -> String {
"https://api.deepseek.com/v1".to_string()
}
fn default_openrouter_model() -> String {
"openai/gpt-3.5-turbo".to_string()
}
fn default_openrouter_base_url() -> String {
"https://openrouter.ai/api/v1".to_string()
}
fn default_commit_format() -> CommitFormat {
CommitFormat::Conventional
}
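// Illustration (hypothetical test, not part of this commit): because the new
// provider tables are #[serde(default)] and the scalar fields carry default
// functions like those above, an existing config.toml that never mentions
// [llm.kimi], [llm.deepseek], or [llm.openrouter] should still deserialize
// with the defaults shown. Assumes the `toml` crate is already a dependency
// for config parsing.
#[cfg(test)]
mod provider_default_tests {
use super::*;
#[test]
fn new_provider_tables_are_optional() {
let llm: LlmConfig = toml::from_str(r#"provider = "kimi""#).unwrap();
assert_eq!(llm.kimi.model, "moonshot-v1-8k");
assert_eq!(llm.deepseek.base_url, "https://api.deepseek.com/v1");
assert_eq!(llm.openrouter.model, "openai/gpt-3.5-turbo");
}
}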

src/llm/deepseek.rs Normal file
View File

@@ -0,0 +1,215 @@
use super::{create_http_client, LlmProvider};
use anyhow::{bail, Context, Result};
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use std::time::Duration;
/// DeepSeek API client
pub struct DeepSeekClient {
base_url: String,
api_key: String,
model: String,
client: reqwest::Client,
}
#[derive(Debug, Serialize)]
struct ChatCompletionRequest {
model: String,
messages: Vec<Message>,
#[serde(skip_serializing_if = "Option::is_none")]
max_tokens: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none")]
temperature: Option<f32>,
stream: bool,
}
#[derive(Debug, Serialize, Deserialize)]
struct Message {
role: String,
content: String,
}
#[derive(Debug, Deserialize)]
struct ChatCompletionResponse {
choices: Vec<Choice>,
}
#[derive(Debug, Deserialize)]
struct Choice {
message: Message,
}
#[derive(Debug, Deserialize)]
struct ErrorResponse {
error: ApiError,
}
#[derive(Debug, Deserialize)]
struct ApiError {
message: String,
#[serde(rename = "type")]
error_type: String,
}
impl DeepSeekClient {
/// Create new DeepSeek client
pub fn new(api_key: &str, model: &str) -> Result<Self> {
let client = create_http_client(Duration::from_secs(60))?;
Ok(Self {
base_url: "https://api.deepseek.com/v1".to_string(),
api_key: api_key.to_string(),
model: model.to_string(),
client,
})
}
/// Create with custom base URL
pub fn with_base_url(api_key: &str, model: &str, base_url: &str) -> Result<Self> {
let client = create_http_client(Duration::from_secs(60))?;
Ok(Self {
base_url: base_url.trim_end_matches('/').to_string(),
api_key: api_key.to_string(),
model: model.to_string(),
client,
})
}
/// Set timeout
pub fn with_timeout(mut self, timeout: Duration) -> Result<Self> {
self.client = create_http_client(timeout)?;
Ok(self)
}
/// Validate API key
pub async fn validate_key(&self) -> Result<bool> {
let url = format!("{}/models", self.base_url);
let response = self.client
.get(&url)
.header("Authorization", format!("Bearer {}", self.api_key))
.send()
.await
.context("Failed to validate DeepSeek API key")?;
if response.status().is_success() {
Ok(true)
} else if response.status().as_u16() == 401 {
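// A 401 means the key itself is invalid; any other failure is surfaced as an error below.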
Ok(false)
} else {
let status = response.status();
let text = response.text().await.unwrap_or_default();
bail!("DeepSeek API error: {} - {}", status, text)
}
}
}
#[async_trait]
impl LlmProvider for DeepSeekClient {
async fn generate(&self, prompt: &str) -> Result<String> {
let messages = vec![
Message {
role: "user".to_string(),
content: prompt.to_string(),
},
];
self.chat_completion(messages).await
}
async fn generate_with_system(&self, system: &str, user: &str) -> Result<String> {
let mut messages = vec![];
if !system.is_empty() {
messages.push(Message {
role: "system".to_string(),
content: system.to_string(),
});
}
messages.push(Message {
role: "user".to_string(),
content: user.to_string(),
});
self.chat_completion(messages).await
}
async fn is_available(&self) -> bool {
self.validate_key().await.unwrap_or(false)
}
fn name(&self) -> &str {
"deepseek"
}
}
impl DeepSeekClient {
async fn chat_completion(&self, messages: Vec<Message>) -> Result<String> {
let url = format!("{}/chat/completions", self.base_url);
let request = ChatCompletionRequest {
model: self.model.clone(),
messages,
max_tokens: Some(500),
temperature: Some(0.7),
stream: false,
};
let response = self.client
.post(&url)
.header("Authorization", format!("Bearer {}", self.api_key))
.header("Content-Type", "application/json")
.json(&request)
.send()
.await
.context("Failed to send request to DeepSeek")?;
let status = response.status();
if !status.is_success() {
let text = response.text().await.unwrap_or_default();
// Try to parse error
if let Ok(error) = serde_json::from_str::<ErrorResponse>(&text) {
bail!("DeepSeek API error: {} ({})", error.error.message, error.error.error_type);
}
bail!("DeepSeek API error: {} - {}", status, text);
}
let result: ChatCompletionResponse = response
.json()
.await
.context("Failed to parse DeepSeek response")?;
result.choices
.into_iter()
.next()
.map(|c| c.message.content.trim().to_string())
.ok_or_else(|| anyhow::anyhow!("No response from DeepSeek"))
}
}
/// Available DeepSeek models
pub const DEEPSEEK_MODELS: &[&str] = &[
"deepseek-chat",
"deepseek-coder",
];
/// Check if a model name is valid
pub fn is_valid_model(model: &str) -> bool {
DEEPSEEK_MODELS.contains(&model)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_model_validation() {
assert!(is_valid_model("deepseek-chat"));
assert!(!is_valid_model("invalid-model"));
}
}

src/llm/kimi.rs Normal file
View File

@@ -0,0 +1,216 @@
use super::{create_http_client, LlmProvider};
use anyhow::{bail, Context, Result};
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use std::time::Duration;
/// Kimi API client (Moonshot AI)
pub struct KimiClient {
base_url: String,
api_key: String,
model: String,
client: reqwest::Client,
}
#[derive(Debug, Serialize)]
struct ChatCompletionRequest {
model: String,
messages: Vec<Message>,
#[serde(skip_serializing_if = "Option::is_none")]
max_tokens: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none")]
temperature: Option<f32>,
stream: bool,
}
#[derive(Debug, Serialize, Deserialize)]
struct Message {
role: String,
content: String,
}
#[derive(Debug, Deserialize)]
struct ChatCompletionResponse {
choices: Vec<Choice>,
}
#[derive(Debug, Deserialize)]
struct Choice {
message: Message,
}
#[derive(Debug, Deserialize)]
struct ErrorResponse {
error: ApiError,
}
#[derive(Debug, Deserialize)]
struct ApiError {
message: String,
#[serde(rename = "type")]
error_type: String,
}
impl KimiClient {
/// Create new Kimi client
pub fn new(api_key: &str, model: &str) -> Result<Self> {
let client = create_http_client(Duration::from_secs(60))?;
Ok(Self {
base_url: "https://api.moonshot.cn/v1".to_string(),
api_key: api_key.to_string(),
model: model.to_string(),
client,
})
}
/// Create with custom base URL
pub fn with_base_url(api_key: &str, model: &str, base_url: &str) -> Result<Self> {
let client = create_http_client(Duration::from_secs(60))?;
Ok(Self {
base_url: base_url.trim_end_matches('/').to_string(),
api_key: api_key.to_string(),
model: model.to_string(),
client,
})
}
/// Set timeout
pub fn with_timeout(mut self, timeout: Duration) -> Result<Self> {
self.client = create_http_client(timeout)?;
Ok(self)
}
/// Validate API key
pub async fn validate_key(&self) -> Result<bool> {
let url = format!("{}/models", self.base_url);
let response = self.client
.get(&url)
.header("Authorization", format!("Bearer {}", self.api_key))
.send()
.await
.context("Failed to validate Kimi API key")?;
if response.status().is_success() {
Ok(true)
} else if response.status().as_u16() == 401 {
Ok(false)
} else {
let status = response.status();
let text = response.text().await.unwrap_or_default();
bail!("Kimi API error: {} - {}", status, text)
}
}
}
#[async_trait]
impl LlmProvider for KimiClient {
async fn generate(&self, prompt: &str) -> Result<String> {
let messages = vec![
Message {
role: "user".to_string(),
content: prompt.to_string(),
},
];
self.chat_completion(messages).await
}
async fn generate_with_system(&self, system: &str, user: &str) -> Result<String> {
let mut messages = vec![];
if !system.is_empty() {
messages.push(Message {
role: "system".to_string(),
content: system.to_string(),
});
}
messages.push(Message {
role: "user".to_string(),
content: user.to_string(),
});
self.chat_completion(messages).await
}
async fn is_available(&self) -> bool {
self.validate_key().await.unwrap_or(false)
}
fn name(&self) -> &str {
"kimi"
}
}
impl KimiClient {
async fn chat_completion(&self, messages: Vec<Message>) -> Result<String> {
let url = format!("{}/chat/completions", self.base_url);
let request = ChatCompletionRequest {
model: self.model.clone(),
messages,
max_tokens: Some(500),
temperature: Some(0.7),
stream: false,
};
let response = self.client
.post(&url)
.header("Authorization", format!("Bearer {}", self.api_key))
.header("Content-Type", "application/json")
.json(&request)
.send()
.await
.context("Failed to send request to Kimi")?;
let status = response.status();
if !status.is_success() {
let text = response.text().await.unwrap_or_default();
// Try to parse error
if let Ok(error) = serde_json::from_str::<ErrorResponse>(&text) {
bail!("Kimi API error: {} ({})", error.error.message, error.error.error_type);
}
bail!("Kimi API error: {} - {}", status, text);
}
let result: ChatCompletionResponse = response
.json()
.await
.context("Failed to parse Kimi response")?;
result.choices
.into_iter()
.next()
.map(|c| c.message.content.trim().to_string())
.ok_or_else(|| anyhow::anyhow!("No response from Kimi"))
}
}
/// Available Kimi models
pub const KIMI_MODELS: &[&str] = &[
"moonshot-v1-8k",
"moonshot-v1-32k",
"moonshot-v1-128k",
];
/// Check if a model name is valid
pub fn is_valid_model(model: &str) -> bool {
KIMI_MODELS.contains(&model)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_model_validation() {
assert!(is_valid_model("moonshot-v1-8k"));
assert!(!is_valid_model("invalid-model"));
}
}

View File

@@ -6,10 +6,16 @@ use std::time::Duration;
pub mod ollama;
pub mod openai;
pub mod anthropic;
pub mod kimi;
pub mod deepseek;
pub mod openrouter;
pub use ollama::OllamaClient;
pub use openai::OpenAiClient;
pub use anthropic::AnthropicClient;
pub use kimi::KimiClient;
pub use deepseek::DeepSeekClient;
pub use openrouter::OpenRouterClient;
/// LLM provider trait
#[async_trait]
@@ -77,6 +83,21 @@ impl LlmClient {
.ok_or_else(|| anyhow::anyhow!("Anthropic API key not configured"))?;
Box::new(AnthropicClient::new(api_key, &config.anthropic.model)?)
}
"kimi" => {
let api_key = config.kimi.api_key.as_ref()
.ok_or_else(|| anyhow::anyhow!("Kimi API key not configured"))?;
Box::new(KimiClient::with_base_url(api_key, &config.kimi.model, &config.kimi.base_url)?)
}
"deepseek" => {
let api_key = config.deepseek.api_key.as_ref()
.ok_or_else(|| anyhow::anyhow!("DeepSeek API key not configured"))?;
Box::new(DeepSeekClient::with_base_url(api_key, &config.deepseek.model, &config.deepseek.base_url)?)
}
"openrouter" => {
let api_key = config.openrouter.api_key.as_ref()
.ok_or_else(|| anyhow::anyhow!("OpenRouter API key not configured"))?;
Box::new(OpenRouterClient::with_base_url(api_key, &config.openrouter.model, &config.openrouter.base_url)?)
}
_ => bail!("Unknown LLM provider: {}", config.provider),
};
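// Illustration (hypothetical, not part of this commit): once boxed, every
// provider is driven through the LlmProvider trait, so callers never branch
// on the provider name again. A sketch, assuming the Box<dyn LlmProvider>
// built by the match above:
//
//     async fn demo(client: &dyn LlmProvider) -> anyhow::Result<String> {
//         anyhow::ensure!(client.is_available().await, "{} is unreachable", client.name());
//         client.generate_with_system("You write conventional commits.", "diff --git ...").await
//     }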

src/llm/openrouter.rs Normal file
View File

@@ -0,0 +1,229 @@
use super::{create_http_client, LlmProvider};
use anyhow::{bail, Context, Result};
use async_trait::async_trait;
use serde::{Deserialize, Serialize};
use std::time::Duration;
/// OpenRouter API client
pub struct OpenRouterClient {
base_url: String,
api_key: String,
model: String,
client: reqwest::Client,
}
#[derive(Debug, Serialize)]
struct ChatCompletionRequest {
model: String,
messages: Vec<Message>,
#[serde(skip_serializing_if = "Option::is_none")]
max_tokens: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none")]
temperature: Option<f32>,
stream: bool,
}
#[derive(Debug, Serialize, Deserialize)]
struct Message {
role: String,
content: String,
}
#[derive(Debug, Deserialize)]
struct ChatCompletionResponse {
choices: Vec<Choice>,
}
#[derive(Debug, Deserialize)]
struct Choice {
message: Message,
}
#[derive(Debug, Deserialize)]
struct ErrorResponse {
error: ApiError,
}
#[derive(Debug, Deserialize)]
struct ApiError {
message: String,
#[serde(rename = "type")]
error_type: String,
}
impl OpenRouterClient {
/// Create new OpenRouter client
pub fn new(api_key: &str, model: &str) -> Result<Self> {
let client = create_http_client(Duration::from_secs(60))?;
Ok(Self {
base_url: "https://openrouter.ai/api/v1".to_string(),
api_key: api_key.to_string(),
model: model.to_string(),
client,
})
}
/// Create with custom base URL
pub fn with_base_url(api_key: &str, model: &str, base_url: &str) -> Result<Self> {
let client = create_http_client(Duration::from_secs(60))?;
Ok(Self {
base_url: base_url.trim_end_matches('/').to_string(),
api_key: api_key.to_string(),
model: model.to_string(),
client,
})
}
/// Set timeout
pub fn with_timeout(mut self, timeout: Duration) -> Result<Self> {
self.client = create_http_client(timeout)?;
Ok(self)
}
/// Validate API key
pub async fn validate_key(&self) -> Result<bool> {
let url = format!("{}/models", self.base_url);
let response = self.client
.get(&url)
.header("Authorization", format!("Bearer {}", self.api_key))
.header("HTTP-Referer", "https://quicommit.dev")
.header("X-Title", "QuiCommit")
.send()
.await
.context("Failed to validate OpenRouter API key")?;
if response.status().is_success() {
Ok(true)
} else if response.status().as_u16() == 401 {
Ok(false)
} else {
let status = response.status();
let text = response.text().await.unwrap_or_default();
bail!("OpenRouter API error: {} - {}", status, text)
}
}
}
#[async_trait]
impl LlmProvider for OpenRouterClient {
async fn generate(&self, prompt: &str) -> Result<String> {
let messages = vec![
Message {
role: "user".to_string(),
content: prompt.to_string(),
},
];
self.chat_completion(messages).await
}
async fn generate_with_system(&self, system: &str, user: &str) -> Result<String> {
let mut messages = vec![];
if !system.is_empty() {
messages.push(Message {
role: "system".to_string(),
content: system.to_string(),
});
}
messages.push(Message {
role: "user".to_string(),
content: user.to_string(),
});
self.chat_completion(messages).await
}
async fn is_available(&self) -> bool {
self.validate_key().await.unwrap_or(false)
}
fn name(&self) -> &str {
"openrouter"
}
}
impl OpenRouterClient {
async fn chat_completion(&self, messages: Vec<Message>) -> Result<String> {
let url = format!("{}/chat/completions", self.base_url);
let request = ChatCompletionRequest {
model: self.model.clone(),
messages,
max_tokens: Some(500),
temperature: Some(0.7),
stream: false,
};
let response = self.client
.post(&url)
.header("Authorization", format!("Bearer {}", self.api_key))
.header("Content-Type", "application/json")
.header("HTTP-Referer", "https://quicommit.dev")
.header("X-Title", "QuiCommit")
.json(&request)
.send()
.await
.context("Failed to send request to OpenRouter")?;
let status = response.status();
if !status.is_success() {
let text = response.text().await.unwrap_or_default();
// Try to parse error
if let Ok(error) = serde_json::from_str::<ErrorResponse>(&text) {
bail!("OpenRouter API error: {} ({})", error.error.message, error.error.error_type);
}
bail!("OpenRouter API error: {} - {}", status, text);
}
let result: ChatCompletionResponse = response
.json()
.await
.context("Failed to parse OpenRouter response")?;
result.choices
.into_iter()
.next()
.map(|c| c.message.content.trim().to_string())
.ok_or_else(|| anyhow::anyhow!("No response from OpenRouter"))
}
}
/// Popular OpenRouter models
pub const OPENROUTER_MODELS: &[&str] = &[
"openai/gpt-3.5-turbo",
"openai/gpt-4",
"openai/gpt-4-turbo",
"anthropic/claude-3-opus",
"anthropic/claude-3-sonnet",
"anthropic/claude-3-haiku",
"google/gemini-pro",
"meta-llama/llama-2-70b-chat",
"mistralai/mixtral-8x7b-instruct",
"01-ai/yi-34b-chat",
];
/// Check if a model name is valid
pub fn is_valid_model(_model: &str) -> bool {
// OpenRouter proxies many models, so any model name is accepted;
// OPENROUTER_MODELS above only lists popular suggestions.
true
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_model_validation() {
assert!(is_valid_model("openai/gpt-4"));
assert!(is_valid_model("custom/model"));
}
}

View File

@@ -14,7 +14,7 @@ use commands::{
init::InitCommand, profile::ProfileCommand, tag::TagCommand,
};
/// QuicCommit - AI-powered Git assistant
/// QuiCommit - AI-powered Git assistant
///
/// A powerful tool that helps you generate conventional commits, tags, and changelogs
/// using AI (LLM APIs or local Ollama models). Manage multiple Git profiles for different

View File

@@ -9,7 +9,7 @@ fn test_cli_help() {
cmd.arg("--help");
cmd.assert()
.success()
.stdout(predicate::str::contains("QuicCommit"));
.stdout(predicate::str::contains("QuiCommit"));
}
#[test]