Tags: Evaluation

Coval

Categories: AI Call Center, AI Chatbot, AI Developer Tools, AI Testing, AI Agent, AI Monitor, AI Speech-to-Text, AI Speech Recognition, AI Text-to-Speech, AI Voice Assistants

Coval: an AI agent simulation and evaluation platform for faster, more reliable development.

GetScorecard

Categories: AI Recruiting, AI PDF, AI Report Generator

Create reusable scorecards to assess candidates, skills, risks, and vendors.

parea.ai

Categories: AI Consulting, AI API, AI Developer Tools, AI Testing, AI Monitor, Large Language Models (LLMs)

Parea AI: an experimentation and human annotation platform that helps AI teams ship LLM apps.

LLMonitor

Categories: AI Developer Tools, AI Testing, Log Management, AI Agent, AI Monitor, Large Language Models (LLMs), Open Source AI Models

An open-source platform for LLM chatbot management, observability, and evaluation.

laminar

Categories: AI Developer Tools, AI Agent, AI Monitor, Large Language Models (LLMs)

An open-source platform for tracing and evaluating AI applications.

Appen

Categories: AI Developer Tools, AI Models, Large Language Models (LLMs)

Appen provides data and services to improve AI model performance and accelerate AI development.

nat.dev

Categories: AI Developer Tools, Large Language Models (LLMs), Open Source AI Models

An open-source playground for testing and evaluating LLMs.

PitchLeague.ai

Categories: AI Coaching, AI Investing, AI Pitch Deck Generator, AI Assistant, AI Presentation Generator, AI Report Generator

An AI-powered platform for evaluating and improving startup pitch decks.

Terracotta

Categories: AI Models, Large Language Models (LLMs)

Terracotta simplifies LLM experimentation with a unified platform for fine-tuning and evaluation.