Glossary
Definitions of terms used throughout the platform and documentation.
Analysis - the AI-powered process of comparing a vendor's proposal against your procurement requirements. Produces findings, scores, and recommendations.
Analysis run - a single execution of analysis on one bid. Each time you click "Analyze", that's one run.
Batch analysis - analyzing multiple bids in sequence with a single click ("Analyze All").
Bid - a vendor's submission for your procurement. Contains one or more uploaded documents (the proposal, supporting materials, etc.).
Captured entity - a person, company, or organization mentioned in the analyzed documents. Automatically detected and, for Latvian procurements, matched against the official business register.
Clarification request - a question the AI asks you during analysis when it encounters something ambiguous. The analysis pauses until you respond.
Concern - see "Finding".
Credits - prepaid units used to pay for analysis on the platform. Purchased via Stripe at 6 EUR per credit.
Critical - the highest severity level for findings. Indicates a potential deal-breaker, such as failing a mandatory requirement.
EDOC - a digitally signed document archive format used in Latvia. The platform extracts and parses all files inside automatically.
Evaluation criteria - scoring rules you define for a procurement. Either pass/fail or scored (0-100). The AI extracts values from each proposal based on these criteria.
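The two criterion types above can be sketched as a small data model. This is an illustrative sketch only, not the platform's actual schema; all names (Criterion, ExtractedValue, weighted_total) are hypothetical, and the gating behavior for failed pass/fail criteria is an assumption:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Criterion:
    """One evaluation rule defined on a procurement (hypothetical model)."""
    name: str
    kind: str              # "pass_fail" or "scored"
    weight: float = 1.0    # relative weight for scored criteria

@dataclass
class ExtractedValue:
    """What the AI extracted from a proposal for one criterion."""
    criterion: Criterion
    passed: Optional[bool] = None   # for pass/fail criteria
    score: Optional[int] = None     # 0-100 for scored criteria

def weighted_total(values: list[ExtractedValue]) -> float:
    """Weighted average of scored criteria; a failed pass/fail criterion zeroes the result."""
    if any(v.criterion.kind == "pass_fail" and v.passed is False for v in values):
        return 0.0  # assumption: failing a mandatory requirement is disqualifying
    scored = [v for v in values if v.criterion.kind == "scored" and v.score is not None]
    total_weight = sum(v.criterion.weight for v in scored)
    if total_weight == 0:
        return 0.0
    return sum(v.score * v.criterion.weight for v in scored) / total_weight
```

For example, a bid that passes its mandatory certification check and scores 80 on price (weight 2) and 50 on quality (weight 1) would total (80*2 + 50*1) / 3 = 70.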
Evidence - exact quotes from documents that support a finding. Included with every concern so you can verify the AI's reasoning.
Finding - an issue, observation, or positive note flagged by the AI during analysis. Also called a "concern" in some parts of the interface. Has a severity level (Critical, Major, Minor, Note, or Strength).
Lot - a sub-item within a procurement that vendors can bid on independently. Common in large public procurements.
Major - the second-highest severity level. A significant gap or unclear response that's risky but not necessarily a deal-breaker.
Minor - a low-severity finding. Small omissions or formatting issues unlikely to affect the overall outcome.
Note - an observation that isn't a problem. Worth knowing but doesn't require action.
Orchestrator - the main AI agent that coordinates the analysis. Plans what to check, delegates to specialist agents, and compiles the final report.
Parsing - converting uploaded documents (PDF, Word, etc.) from their original, layout-oriented format into structured text the AI can read and search.
Procurement - your project workspace for one purchasing process. Contains RFP documents, vendor bids, analysis results, and comparison data.
RFP - Request for Proposal. The documents describing what you're looking for - requirements, specifications, evaluation rules, contract terms.
Semantic search - a search method that finds content based on meaning rather than exact keywords. Used by the AI to locate relevant passages in large documents efficiently.
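In meaning-based search, text is typically mapped to embedding vectors and compared by cosine similarity. The sketch below uses tiny hand-made vectors purely to illustrate the ranking idea; a real system would obtain the vectors from an embedding model, and the function names here are hypothetical:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: close to 1.0 means the vectors point the same way
    (i.e. the texts are about the same thing)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_passages(query_vec: list[float],
                 passage_vecs: dict[str, list[float]],
                 k: int = 3) -> list[str]:
    """Return the ids of the k passages most similar to the query."""
    ranked = sorted(passage_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [pid for pid, _ in ranked[:k]]
```

With toy vectors, a query pointing along [1, 0] ranks a passage at [1, 0] above one at [0.7, 0.7], and both above an unrelated passage at [0, 1] - no keyword overlap is needed, only vector direction.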
Strategic briefing - the executive summary at the top of analysis results. Includes a recommendation (Strong Submit, Submit with Improvements, Significant Revision Needed, or Do Not Submit), concern counts, and a plain-language summary.
Strength - a positive finding. Something the vendor did well or exceeded requirements on.
Verification - a quality control step where a separate AI agent reviews Critical and Major findings. Can confirm, downgrade, or remove findings.