LLM API Hosts
- Perplexity https://api.perplexity.ai/chat/completions
- Gemini CLI Workers https://gemini-cli-worker.mas-iyom.workers.dev/v1/chat/completions
- Cohere https://api.cohere.ai/compatibility/v1
- OpenAI (Chat Completions) https://api.openai.com/v1/chat/completions
- OpenAI (Responses) https://api.openai.com/v1/responses
- Portkey https://api.portkey.ai/v1/chat/completions
- Gemini https://generativelanguage.googleapis.com
- OpenRouter https://openrouter.ai/api/v1/chat/completions
- DeepSeek https://api.deepseek.com/v1/chat/completions
- Anthropic https://api.anthropic.com/v1/messages
- Moonshot https://api.moonshot.cn/v1/chat/completions
- Mistral https://api.mistral.ai/v1/chat/completions
- xAI Grok https://api.x.ai/v1/chat/completions
- Cerebras https://api.cerebras.ai/v1/chat/completions
- Groq https://api.groq.com/openai/v1/chat/completions
- Ollama http://localhost:11434/v1/chat/completions (default port; adjust if your local server is bound elsewhere)
- Fireworks https://api.fireworks.ai/inference/v1/chat/completions
- Nvidia https://integrate.api.nvidia.com/v1/chat/completions
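Most of the hosts above expose the OpenAI-compatible `chat/completions` request shape, so one request skeleton covers them by swapping the base URL and key. Below is a minimal Python sketch using only the standard library; the chosen host, the model id, and the `LLM_API_KEY` environment variable name are placeholders for illustration, not fixed choices from this list.

```python
# Minimal sketch of a request to one of the OpenAI-compatible hosts above.
# Assumptions: the URL, model id, and LLM_API_KEY env var are placeholders --
# swap in whichever host, model, and key you actually use.
import json
import os
import urllib.request

URL = "https://api.groq.com/openai/v1/chat/completions"  # any chat/completions host from the list
API_KEY = os.environ.get("LLM_API_KEY", "")              # hypothetical env var name

payload = {
    "model": "llama-3.1-8b-instant",  # assumed model id; check the provider's model list
    "messages": [
        {"role": "user", "content": "Say hello in one sentence."}
    ],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    # OpenAI-compatible responses return the reply under choices[0].message.content
    print(body["choices"][0]["message"]["content"])
```

Note that Anthropic's /v1/messages, Google's generativelanguage endpoint, and OpenAI's /v1/responses each use their own request and response shapes (Anthropic, for instance, expects x-api-key and anthropic-version headers rather than a bearer token), so this skeleton only applies to the chat/completions-style hosts.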
LLM API Billing
- DeepSeek: DeepSeek Platform
- OpenAI: OpenAI API
- Gemini: Google AI Studio
- Anthropic: Claude Console
- OpenRouter: OpenRouter
- Moonshot: Moonshot AI Open Platform
- Mistral: Mistral AI
- Perplexity: Perplexity
- Groq: Groq
- xAI Grok: xAI Cloud Console
- Cohere: Cohere
- Qwen (Alibaba): Alibaba Cloud