VerifIA
VerifIA is an open-source AI testing framework for domain‑aware verification of machine‑learning models during the staging phase—before deployment.
Definition: The staging phase encompasses model training on the training set, hyperparameter tuning on the validation set, and performance evaluation on the test set. During this phase, models must satisfy domain-specific requirements and regulatory standards before advancing to production.
Fundamentally, VerifIA automates a structured sequence of verifications to assess model consistency with domain knowledge. At the end of each run, it generates a validation report to:
- Inform deployment decisions by operations teams.
- Guide engineers in debugging and enhancing pipeline robustness.
Why VerifIA?
Most production AI systems today rely on mature, open‑source frameworks—scikit‑learn, LightGBM, CatBoost, XGBoost, PyTorch, TensorFlow—that keep pace with the latest research and hardware accelerators. While these libraries empower practitioners to build state‑of‑the‑art models on large datasets, many organizations struggle to fully understand or control the resulting systems.
- Rush to Adopt: Businesses integrating AI into existing analytics workflows often end up with only a partial grasp of model behavior.
- Regulatory Pressure: Rising public concern has led to AI‑specific policies and soft laws mandating compliance with human‑centered legal requirements.
- Safety‑Critical Caution: Industries with zero‑tolerance for failure (e.g. aerospace, healthcare) have hesitated to deploy AI without rigorous quality assurance.
Even rare but catastrophic AI failures—whether due to distributional shifts, spurious correlations, or untested edge cases—are unacceptable in many domains.
VerifIA brings domain‑aware model verification to your staging pipeline:
- **Seamless Integration**
- **AI‑Assisted Domain Creation**
  - Draft your domain YAML (variables, constraints, rules) automatically from sample data and documents.
  - Compatible with any LLM via LangChain (e.g. OpenAI, Anthropic, or on‑premise models).
- **Systematic Verification**
  - Run a battery of rule‑consistency checks derived from your domain knowledge.
  - Explore out‑of‑sample, in‑domain edge cases via population‑based searchers.
- **Actionable Reports**
  - Generate an interactive validation report linked back to your staging model.
  - Operators decide deployment readiness.
  - Engineers gain debugging insights and quality‑improvement guidance.
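To make "rule‑consistency check" concrete, here is a minimal, framework‑agnostic sketch of testing one monotonicity rule against a model. The rule, feature names, and toy models are illustrative assumptions, not VerifIA's API:

```python
# Minimal sketch of a rule-consistency check; illustrative only, not VerifIA's API.

def check_monotone_rule(model, base, feature, delta):
    """Rule: increasing `feature` (all else fixed) must not decrease the prediction."""
    perturbed = dict(base, **{feature: base[feature] + delta})
    return model(perturbed) >= model(base)

# A toy model that satisfies the rule, and one that violates it:
increasing = lambda s: 2.0 * s["dose"]
decreasing = lambda s: -s["dose"]
```

A real verifier would evaluate many such rules over generated samples and aggregate the violation rate into a compliance score.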
VerifIA Testing Workflow (Arrange–Act–Assert)
VerifIA adopts the well-known Arrange–Act–Assert pattern from software testing, adapted for AI model verification:
- **Arrange**
  - Define application-specific expectations—the domain expert’s assertions about correct model behavior.
  - Structure these assertions into a formal, machine-verifiable format.
- **Act**
  - Generate synthetic input samples that reflect the original data distribution and explore beyond it.
  - Support targeted generation within specific input subspaces for deeper analysis.
- **Assert**
  - Evaluate the model’s responses against the arranged expectations.
  - Quantify compliance levels to identify safe operational domains and uncover inconsistencies.
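The Act step's in‑domain sampling can be illustrated with a conceptual sketch. The variable ranges and the feasibility constraint below are invented for exposition; this is not VerifIA's implementation:

```python
import random

# Conceptual sketch of domain-constrained sampling (the "Act" step).
# Ranges and the feasibility constraint are illustrative, not VerifIA's.
RANGES = {"temperature": (0.0, 100.0), "pressure": (1.0, 10.0)}

def feasible(sample):
    # Illustrative inter-feature constraint: hotter samples need more pressure.
    return sample["pressure"] >= sample["temperature"] / 20.0

def sample_in_domain(n, seed=0):
    """Rejection-sample n points that respect both ranges and constraints."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
        if feasible(cand):  # keep only feasible candidates
            out.append(cand)
    return out
```

Population‑based searchers refine this idea: instead of sampling blindly, they steer generation toward regions where the model is most likely to violate a rule.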
AI‑Powered Domain Generation
🚀 Preview: VerifIA Domain Spec Generator UI

Forget manual YAML editing—VerifIA can auto‑draft your entire domain spec in minutes.
Using LangChain‑compatible LLMs, it ingests:
- Your data (CSV, Parquet, DataFrame)
- Your documentation (PDFs or a vector DB)
- Your model card (YAML file or dict)
It then generates a ready‑to‑use domain.yaml that includes:
- Variable definitions (types, real‑world ranges)
- Feasibility constraints (inter‑feature formulas)
- Behavioral rules (premises → conclusions)
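As an illustration, a drafted spec might look like the sketch below. The field names and layout are assumptions made for exposition; consult the generated file for VerifIA's actual schema:

```yaml
# Illustrative sketch only; the real VerifIA schema may use different keys.
variables:
  temperature:
    type: float
    range: [0.0, 100.0]    # real-world operating range
  pressure:
    type: float
    range: [1.0, 10.0]
constraints:
  - "pressure >= temperature / 20"   # inter-feature feasibility formula
rules:
  - premise: "temperature increases, all else fixed"
    conclusion: "predicted risk does not decrease"
```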
Human‑in‑the‑Loop Experience
Through the built‑in Gradio interface, you can:
- Review the AI‑drafted spec side‑by‑side.
- Edit any section inline.
- Regenerate specific parts on demand.
- Validate against your schema before export.
👉 Dive into the AI‑Based Domain Generation Guide for a full walkthrough.
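The "validate against your schema before export" step can be sketched as a plain pre‑export check. The section names and helper below are hypothetical; the real VerifIA schema validation may differ:

```python
# Hypothetical pre-export check; the real VerifIA schema may differ.
REQUIRED_SECTIONS = ("variables", "constraints", "rules")

def validate_domain_spec(spec):
    """Return a list of problems; an empty list means the draft looks exportable."""
    problems = [f"missing section: {s}" for s in REQUIRED_SECTIONS if s not in spec]
    for name, var in spec.get("variables", {}).items():
        if "type" not in var:
            problems.append(f"variable '{name}' has no type")
    return problems
```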
Quickstart
Install VerifIA and run your first verification in minutes:
```python
from verifia.verification import RuleConsistencyVerifier

# 1. Arrange: load your domain rules
verifier = RuleConsistencyVerifier("domain_rules.yaml")

# 2. Act: attach model (card or dict) + data, pick a searcher, and run
report = (
    verifier
    .verify(model_card_fpath_or_dict="model_card.yaml")
    .on(data_fpath="test_data.csv")   # .csv, .json, .xlsx, .parquet, .feather, .pkl
    .using("GA")                      # RS, FFA, MFO, GWO, MVO, PSO, WOA, GA, SSA
    .run(pop_size=50, max_iters=100)  # search budget
)

# 3. Assert: save and inspect your report
report.save_as_html("verification_report.html")
```
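The searcher codes accepted by `.using(...)` are listed in the comment above. The name expansions below are our reading of the standard metaheuristics these codes usually denote (not confirmed by VerifIA's docs), and the validation helper itself is hypothetical:

```python
# Hypothetical helper around the searcher codes from the Quickstart.
# The expansions are assumed standard metaheuristic names, not VerifIA's docs.
SEARCHERS = {
    "RS":  "Random Search",
    "FFA": "Firefly Algorithm",
    "MFO": "Moth-Flame Optimization",
    "GWO": "Grey Wolf Optimizer",
    "MVO": "Multi-Verse Optimizer",
    "PSO": "Particle Swarm Optimization",
    "WOA": "Whale Optimization Algorithm",
    "GA":  "Genetic Algorithm",
    "SSA": "Salp Swarm Algorithm",
}

def pick_searcher(code):
    """Validate a searcher code before passing it to `.using(...)`."""
    if code not in SEARCHERS:
        raise ValueError(f"Unknown searcher '{code}'; choose from {sorted(SEARCHERS)}")
    return code
```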
Learn VerifIA
- **Concepts**
  Explore VerifIA’s core abstractions—Data, Model, Domain, Searcher, and Run—to understand how they interconnect.
- **Tutorials**
  Hands‑on walkthroughs that guide you through the core components of VerifIA on simplified examples.
- **Guides**
  Deep‑dive guides on domain creation, AI‑Based Domain Generation, and configuration.
- **Use Cases**
  Discover how teams in finance, healthcare, and beyond can leverage VerifIA to boost model reliability.
Contribute
Help us improve VerifIA: contributions, bug reports, and feature requests are all welcome.
If you find VerifIA useful, please give us a star on ⭐️GitHub⭐️!
License
VerifIA Tool is available under:
| License | Description |
|---|---|
| AGPL‑3.0 | Community‑driven, OSI‑approved open‑source license |
| Enterprise | Commercial license for proprietary integration (no AGPL obligations). Mail to contact@verifia.ca. |
We champion open source by returning all improvements back to the community ❤️. Your contributions help advance the field for everyone.