Tag: Inference
mancer
Unfiltered LLM inference service with free and paid models.
Petals
Collaborative tool for running large language models by distributing model parts.
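A minimal sketch of what distributed generation with Petals looks like in Python, following the pattern the project documents; the checkpoint name is only an illustration, and generation requires public swarm peers serving that model.

    # Hedged sketch: requires the petals and transformers packages
    # (pip install petals); the model id below is an assumed example.
    from transformers import AutoTokenizer
    from petals import AutoDistributedModelForCausalLM

    model_name = "petals-team/StableBeluga2"  # illustrative checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # Model layers are served by volunteer peers across the swarm;
    # only a small part of the model runs locally.
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("A short prompt:", return_tensors="pt")["input_ids"]
    outputs = model.generate(inputs, max_new_tokens=16)
    print(tokenizer.decode(outputs[0]))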
MetaY
MetaY lets you earn rewards by contributing unused GPU power for AI tasks.
GPUX.AI
GPU platform for Dockerized applications and AI inference with cost savings.
cirrascale.com
Cloud solutions for AI development, training, and inference with various AI accelerators.
Hugging Face
AI community platform for open-source ML models, datasets, and applications.
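A minimal sketch of pulling an open-source model from the Hugging Face Hub with the transformers library; the pipeline task and model id are illustrative choices, not the only way to use the platform.

    # Requires pip install transformers (plus a backend such as PyTorch).
    from transformers import pipeline

    # Downloads the weights from the Hub on first use; the model id is
    # just an illustrative, commonly used sentiment checkpoint.
    classifier = pipeline(
        "sentiment-analysis",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )
    print(classifier("Inference platforms keep getting cheaper."))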
AI Inferkit
Inferkit AI is a cost-effective LLM router for AI development.
Finetunefast
ML boilerplate to finetune and ship AI models in production quickly.
Prem
PremAI builds sovereign, private, and personalized AI solutions for enterprises and consumers.
WizModel
Platform for simplified ML model deployment, scaling, and inference via a unified API.
RunPod
RunPod offers cost-effective GPU rentals and serverless inference for AI development and scaling.
EnergeticAI
EnergeticAI optimizes TensorFlow.js for serverless AI applications with easy deployment.
Predibase
Predibase is a low-code AI platform for building, fine-tuning, and deploying state-of-the-art AI models, including LLMs.
Fluxaigen
Unified interface for LLMs offering better prices, uptime, and data policies.
AI Tools 99
Platform to run AI models on GPUs via API, pay-per-second billing.
Runware
Affordable, ultra-fast Stable Diffusion API for image generation and editing.
fireworks.ai
A platform for fast inference of generative AI models, including fine-tuning and deployment.
Cerebrium
Serverless AI infrastructure platform for building, deploying, and scaling AI applications with cost savings.
Together AI
AI Acceleration Cloud for fast inference, fine-tuning, and training.
Synexa AI
Synexa AI lets you deploy AI models with one line of code on cost-effective, scalable GPU infrastructure.
OpenRouter
Unified interface for LLMs, offering access to various models and prices with better uptime.
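A hedged sketch of calling OpenRouter through its OpenAI-compatible chat completions endpoint using the openai Python SDK; the base URL, environment variable name, and model id are assumptions to verify against OpenRouter's documentation.

    # Requires pip install openai (v1+) and an OpenRouter API key.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",   # assumed OpenAI-compatible endpoint
        api_key=os.environ["OPENROUTER_API_KEY"],  # assumed env var holding your key
    )
    resp = client.chat.completions.create(
        model="mistralai/mistral-7b-instruct",     # illustrative model id
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(resp.choices[0].message.content)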
TextSynth
TextSynth offers access to various AI models via API and playground.
The Complete Guide to Mistral 7B
A powerful, open-source LLM with various fine-tuned models for specific applications.
Deep Infra
A platform for deploying and running machine learning models with a simple API and pay-per-use pricing.
Outspeed
Outspeed helps you build low-latency AI apps and deploy AI voice companions with emotions and memory.
Opnbook
TAHO optimizes AI, Cloud, and HPC infrastructure for efficiency and performance.
Union Cloud
Managed workflow orchestrator for building, managing, and monitoring data and ML pipelines.