Monte Carlo announced the launch of Agent Observability, a groundbreaking capability that provides end-to-end visibility across the data + AI stack. This enables teams to detect, triage, and resolve AI reliability issues in production, preventing costly data + AI downtime, preserving customer trust, and ensuring AI-powered products are accurate, relevant, and reliable.
With this release, Monte Carlo becomes the first vendor to unify observability across both data and AI stacks within a single platform, allowing teams to ensure the quality of agent inputs and outputs.
“BARC research finds that more than 40% of companies don’t trust the outputs of their AI/ML models,” said Kevin Petrie, VP of Research at BARC U.S. “To make AI safe, they must extend their data governance programs to mitigate the new risks of models and agents, with a focus on responsible outputs. Monte Carlo’s new capabilities help AI adopters achieve this with comprehensive observability.”
Unified Observability for the Data + AI Era
A 2025 survey of data and AI leaders found that nearly 80 percent of their organizations have already adopted AI agents. Yet data + AI teams struggle to measure and monitor the reliability of these agents at scale, putting their ability to roll out production-ready, scalable AI products at risk. In fact, Gartner estimates that enterprises abandon a striking 30% of AI initiatives primarily due to data quality issues.
“For data and AI teams, reliability isn’t a ‘nice to have,’ it’s the foundation for building scalable, adopted, revenue-driving AI products,” said Barr Moses, co-founder and CEO of Monte Carlo. “When AI agents fail, the consequences are massive and long-standing: low adoption of costly and time-consuming work, erosion of customer trust, and a huge hit to the bottom line of the business. Point solutions to solve siloed problems simply won’t cut it anymore. Our customers need a unified approach to ensure their AI agents are behaving as they should, delivering trustworthy outputs, and driving real value.”
Recognized by analysts and rated #1 by G2, Gartner Peer Insights, ISG, and others, Monte Carlo has a proven track record of anticipating and solving the most complex reliability challenges for the world’s most forward-thinking enterprises, including Nasdaq, Honeywell, Roche, and Fox. Now, with the launch of Agent Observability, Monte Carlo extends its category leadership to AI, fortifying its world-class observability platform with the capabilities needed for an agent-built, AI-driven future.
Monitoring AI Agent Reliability in Production
Current point solutions can detect reliability issues in either the data inputs or the model outputs, but not both. With this release, Monte Carlo breaks down those silos and empowers teams to detect, triage, and resolve reliability issues from data ingestion and transformation through AI retrieval and response.
With Agent Observability, data + AI teams can detect poor AI outputs, performance issues, and failures using LLM-as-judge or deterministic evaluations. Users set criteria for what “correct” AI output looks like and are automatically alerted when agent responses underperform in production. The solution is highly customizable, allowing data + AI teams to monitor a diverse range of quality criteria adapted to the requirements of each organization and use case.
Agent Observability also includes a suite of built-in low-code evaluations that address the most common factors impacting agents. These can detect when outputs become less relevant or less helpful to user queries, flag declines in clarity and readability, identify mismatches in language, and track whether tasks are being successfully completed.
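The deterministic side of this pattern can be illustrated with a small sketch. The function and criteria below are hypothetical, not Monte Carlo's API: they simply show the idea of scoring an agent response against preset definitions of "correct" output (non-empty, within length limits, relevant to the query).

```python
def evaluate_response(query: str, response: str, max_words: int = 150) -> dict:
    """Score an agent response on simple deterministic criteria.

    Illustrative only -- real evaluations would cover clarity,
    language match, task completion, and LLM-as-judge scoring.
    """
    words = response.split()
    # Relevance proxy: fraction of query terms that appear in the response.
    query_terms = {w.lower().strip(".,?!") for w in query.split()}
    resp_terms = {w.lower().strip(".,?!") for w in words}
    covered = sum(1 for t in query_terms if t in resp_terms)
    relevance = covered / max(len(query_terms), 1)
    return {
        "non_empty": len(words) > 0,
        "within_length": len(words) <= max_words,
        "relevance": round(relevance, 2),
    }

result = evaluate_response(
    "What is data downtime?",
    "Data downtime is any period when data is missing or inaccurate.",
)
```

In a production setting, a monitoring system would run checks like these over sampled agent responses and alert when scores fall below a configured threshold.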
Built-In Telemetry for Faster Root Cause Analysis
With Agent Observability, teams can quickly uncover the root cause of performance or reliability issues, reducing downtime and keeping AI agents running smoothly. It tracks key signals like prompts, completions, user queries, latency, and errors, giving teams a clear view into how agents are performing in production.
All telemetry is stored within the customer’s existing data warehouse, lakehouse or lake, making it easier to connect poor outputs back to underlying issues. Sensitive tracing data never leaves the customer’s infrastructure, ensuring both transparency and security.
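A minimal sketch of what capturing those signals might look like, assuming a simple wrapper around an agent call (the record schema and function names here are illustrative, not Monte Carlo's implementation):

```python
import time
from dataclasses import asdict, dataclass, field
from typing import Optional


@dataclass
class AgentTrace:
    """One telemetry record: prompt, completion, query, latency, error."""
    user_query: str
    prompt: str
    completion: str = ""
    error: Optional[str] = None
    started_at: float = field(default_factory=time.time)
    latency_ms: float = 0.0


def traced_call(query: str, prompt: str, agent) -> AgentTrace:
    """Invoke an agent callable while capturing the signals above."""
    trace = AgentTrace(user_query=query, prompt=prompt)
    start = time.perf_counter()
    try:
        trace.completion = agent(prompt)
    except Exception as exc:
        trace.error = str(exc)  # record failures instead of crashing
    trace.latency_ms = (time.perf_counter() - start) * 1000
    # asdict(trace) yields a dict ready to land in a warehouse table.
    return trace
```

Keeping records like these in the customer's own warehouse or lakehouse is what lets a poor output be joined back to the upstream data that produced it.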
Redefining Observability for the Data + AI Era
The introduction of Agent Observability cements Monte Carlo’s vision for the future of observability—one where data and AI are treated as a unified, interdependent ecosystem. Monte Carlo is the only vendor providing a single platform with visibility across ingestion, pipelines, prompts, LLMs, and outputs.
The post Monte Carlo Launches Agent Observability to Help Teams Build Reliable AI first appeared on PressReleaseCC.