Understanding Verification Reports
How to interpret AI verification results and make informed decisions
Report Overview
Every completed verification generates a comprehensive report analyzing the submitted content across multiple dimensions. Reports combine automated AI analysis with structured scoring to provide actionable insights for administrators reviewing copyright registrations.
Understanding how to read and interpret these reports is essential for making informed approval or rejection decisions. This guide explains each section of the report and how to use the information effectively.
Report Structure
Verification reports are organized into distinct sections, each providing specific insights (a sketch of one possible data model for this structure follows the section breakdown below):
1. Executive Summary
The executive summary appears at the top of every report and provides a high-level overview of the verification results. This section is designed for quick assessment and decision-making.
- Overall Risk Score: Single aggregated score from 0-100
- Recommendation: Approve, reject, or investigate further
- Key Findings: Most significant concerns identified
- Processing Time: How long verification took
- Providers Used: Which AI providers contributed
2. Individual Confidence Scores
Each of the four verification types produces its own confidence score. These scores indicate how strongly the AI detected specific issues.
- Infringement Risk: 0-100 scale, higher means more similarity detected
- Laundering Risk: 0-100 scale, higher means manipulation detected
- Authenticity Score: 0-100 scale, higher means more authentic
- Duplicate Probability: 0-100 scale, higher means duplicate likely
3. Detailed Findings
Specific evidence and reasoning supporting the confidence scores. This section provides the "why" behind the numbers.
- Exact matches or similarities found
- Specific manipulation techniques detected
- AI generation indicators identified
- Duplicate records in database
- Supporting evidence and excerpts
4. Provider-Specific Results
Individual results from each AI provider that participated in the analysis. Different providers may identify different concerns.
- OpenAI GPT-4o text and audio analysis
- Anthropic Claude reasoning and context
- Google Cloud vision and transcription
- Custom ML model fingerprinting
5. Technical Details
Metadata and technical information about the verification process itself.
- Verification ID and timestamp
- Content type and file formats analyzed
- Processing duration and queue time
- Provider API versions and models used
- Any errors or warnings encountered
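Teams that consume these reports programmatically may find it helpful to think of the structure above as a typed object. The sketch below, in TypeScript, is a hypothetical data model assembled from the sections described in this guide; the field names are assumptions for illustration, not the platform's actual schema.

```typescript
// Illustrative data model for a verification report. All field names here are
// assumptions for this sketch, not the platform's actual API schema.

type Recommendation = "approve" | "reject" | "investigate";

interface ExecutiveSummary {
  overallRiskScore: number;      // 0-100, aggregated across all checks
  recommendation: Recommendation;
  keyFindings: string[];         // most significant concerns identified
  processingTimeMs: number;      // how long verification took
  providersUsed: string[];       // which AI providers contributed
}

interface ConfidenceScores {
  infringementRisk: number;      // 0-100, higher = more similarity detected
  launderingRisk: number;        // 0-100, higher = manipulation detected
  authenticityScore: number;     // 0-100, higher = more authentic
  duplicateProbability: number;  // 0-100, higher = duplicate more likely
}

interface ProviderResult {
  provider: string;              // e.g. "openai", "anthropic", "google", "custom-ml"
  summary: string;               // provider-specific conclusion
  evidence: string[];            // supporting excerpts or indicators
}

interface TechnicalDetails {
  verificationId: string;
  timestamp: string;             // ISO 8601
  contentTypes: string[];        // file formats analyzed
  processingDurationMs: number;
  queueTimeMs: number;
  modelsUsed: string[];          // provider API versions and models
  warnings: string[];            // any errors or warnings encountered
}

interface VerificationReport {
  summary: ExecutiveSummary;
  scores: ConfidenceScores;
  detailedFindings: string[];    // evidence and reasoning behind the scores
  providerResults: ProviderResult[];
  technical: TechnicalDetails;
}
```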
Understanding Risk Scores
All risk scores in the verification system use a consistent 0-100 scale. Understanding how to interpret these scores is crucial for making appropriate decisions.
Low Risk (0-30)
Content appears legitimate with minimal concerns detected. Generally safe to approve, but always review context and other factors.
Medium Risk (31-70)
Some concerns identified that warrant further investigation. Manual review recommended before making final decision.
High Risk (71-100)
Significant issues detected. Recommendation is typically rejection unless strong evidence supports legitimacy.
Overall Risk
Weighted combination of all individual scores, with the authenticity score inverted before combining (low authenticity raises overall risk). This is the most important metric for quick assessment and decision-making.
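To make the banding and aggregation concrete, here is a minimal sketch that classifies a score into the bands above and combines the four confidence scores, reusing the ConfidenceScores shape from the earlier sketch. The weights are illustrative assumptions; the platform's actual aggregation is internal and may weight a dominant risk more heavily, but the inversion of the authenticity score reflects the fact that low authenticity raises overall risk.

```typescript
// Illustrative aggregation only: the real weighting is internal to the platform
// and may emphasize whichever individual risk is highest.

type RiskBand = "low" | "medium" | "high";

function riskBand(score: number): RiskBand {
  if (score <= 30) return "low";     // 0-30: minimal concerns
  if (score <= 70) return "medium";  // 31-70: warrants investigation
  return "high";                     // 71-100: significant issues
}

// Assumed weights for this sketch; not the production values.
const WEIGHTS = { infringement: 0.35, laundering: 0.25, authenticity: 0.25, duplicate: 0.15 };

function overallRisk(scores: ConfidenceScores): number {
  // Authenticity is inverted: a highly authentic work contributes little risk.
  const authenticityRisk = 100 - scores.authenticityScore;
  return (
    scores.infringementRisk * WEIGHTS.infringement +
    scores.launderingRisk * WEIGHTS.laundering +
    authenticityRisk * WEIGHTS.authenticity +
    scores.duplicateProbability * WEIGHTS.duplicate
  );
}
```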
Interpreting Confidence Scores
Infringement Risk Score
Indicates similarity to existing copyrighted works. Higher scores suggest potential infringement.
- 0-20: No significant similarity found, unique content
- 21-40: Minor similarities that may be coincidental
- 41-60: Notable similarities requiring investigation
- 61-80: Strong similarities suggesting possible infringement
- 81-100: Very high similarity, likely infringement
Laundering Risk Score
Detects attempts to disguise copyrighted content through modifications. Higher scores indicate manipulation.
- 0-20: No manipulation indicators detected
- 21-40: Minor modifications that appear legitimate
- 41-60: Suspicious patterns requiring closer examination
- 61-80: Clear manipulation techniques identified
- 81-100: Strong evidence of copyright laundering
Authenticity Score
Measures likelihood that content is genuine human-created work. Note: Higher is better for authenticity.
- 0-20: Almost certainly AI-generated or fake
- 21-40: Strong AI generation indicators present
- 41-60: Uncertain authenticity, mixed signals
- 61-80: Likely authentic with minor concerns
- 81-100: High confidence in human authenticity
Duplicate Probability
Likelihood that content matches existing registrations. Higher scores indicate duplication. A sketch that maps any of the four confidence scores to its descriptive band follows the list below.
- 0-20: No duplicates found, unique work
- 21-40: Some similarity but likely distinct work
- 41-60: Notable similarity to existing registrations
- 61-80: High similarity, likely duplicate
- 81-100: Near-certain duplicate registration
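All four confidence scores use the same five numeric bands, so a report viewer can translate any of them into its descriptive label. A minimal sketch, using the infringement labels above; the other three label sets would follow the same pattern.

```typescript
// Maps a 0-100 confidence score to the five descriptive bands used above.
// The label text is taken from this guide; any UI wording is an assumption.

const INFRINGEMENT_LABELS = [
  "No significant similarity found",
  "Minor similarities that may be coincidental",
  "Notable similarities requiring investigation",
  "Strong similarities suggesting possible infringement",
  "Very high similarity, likely infringement",
];

function bandIndex(score: number): number {
  // 0-20 -> 0, 21-40 -> 1, 41-60 -> 2, 61-80 -> 3, 81-100 -> 4
  if (score <= 20) return 0;
  if (score <= 40) return 1;
  if (score <= 60) return 2;
  if (score <= 80) return 3;
  return 4;
}

// Example: an infringement risk of 78 falls in the fourth band.
console.log(INFRINGEMENT_LABELS[bandIndex(78)]);
// "Strong similarities suggesting possible infringement"
```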
Reading Detailed Findings
The detailed findings section provides specific evidence supporting the confidence scores. Understanding how to interpret this information helps validate AI conclusions. A sketch of how these evidence types might be modeled programmatically follows the list below.
Types of Evidence Provided
- Text Matches: Specific lyric phrases or passages matching known works
- Audio Signatures: Waveform fingerprints with similarity percentages
- Melodic Patterns: Musical sequences identified in analysis
- Metadata Anomalies: Inconsistencies in file metadata
- AI Artifacts: Specific indicators of AI generation
- Database Matches: Registration IDs of similar works
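For illustration, these evidence types could be modeled as a discriminated union so that tooling can handle each kind explicitly. The shape below is hypothetical and not the report's actual JSON schema.

```typescript
// Hypothetical discriminated union over the evidence types listed above.
type Evidence =
  | { kind: "text_match"; phrase: string; matchedWork: string }
  | { kind: "audio_signature"; similarityPercent: number }
  | { kind: "melodic_pattern"; description: string }
  | { kind: "metadata_anomaly"; field: string; detail: string }
  | { kind: "ai_artifact"; indicator: string }
  | { kind: "database_match"; registrationId: string };

// Example: group a report's evidence by kind for display.
function groupByKind(evidence: Evidence[]): Map<Evidence["kind"], Evidence[]> {
  const groups = new Map<Evidence["kind"], Evidence[]>();
  for (const item of evidence) {
    const bucket = groups.get(item.kind) ?? [];
    bucket.push(item);
    groups.set(item.kind, bucket);
  }
  return groups;
}
```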
Example: Infringement Finding
Similarity detected with "Song Title" by Artist Name
- Registration ID: abc-123-xyz
- Similarity Score: 78%
- Matching Elements:
  * Chorus melody: 85% match
  * Chord progression: 92% match
  * Lyrical phrase: "specific phrase here" (exact match)
- Analysis: High structural similarity suggests derivative work
Example: Laundering Detection
Manipulation indicators detected:
- Pitch shifted by +2 semitones
- Tempo increased by 8%
- Spectral signature matches known work after normalization
- File metadata shows recent conversion
- Recommendation: Likely laundered content
Example: AI Detection
AI generation indicators found:
- Vocal synthesis markers consistent with AI voice generation
- Instrumental patterns match common ML model outputs
- Lack of natural recording artifacts
- Metadata inconsistent with claimed recording method
- Confidence: 89% AI-generated
Provider-Specific Analysis
Different AI providers contribute unique perspectives to the verification. Understanding what each provider focuses on helps interpret conflicting results.
OpenAI Results
Text analysis, audio transcription, visual content recognition. Strong at natural language understanding and contextual analysis.
Anthropic Claude Results
Deep reasoning, legal context, complex pattern recognition. Excellent at explaining "why" behind detections.
Google Cloud Results
OCR, image labeling, speech-to-text. Specialized in visual and audio document processing.
Custom ML Results
Audio fingerprinting, waveform analysis, spectral comparison. Technical analysis of audio characteristics.
When Providers Disagree
If different providers produce conflicting results, consider these factors (a simple consensus check is sketched after this list):
- Each provider analyzes different aspects of the content
- Check which provider specializes in the relevant area
- Look for consensus among majority of providers
- Consider the strength of evidence from each provider
- Review detailed findings for specific evidence
- When in doubt, err on the side of caution and reject
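The majority-consensus guidance can be applied mechanically once each provider's result is reduced to a flagged/not-flagged signal for the concern under review. A minimal sketch, assuming that reduction is available; the real provider results are richer than a boolean.

```typescript
// Illustrative consensus check: treat a concern as confirmed when a majority
// of participating providers flag it. The boolean reduction is an assumption.

interface ProviderFlag {
  provider: string;   // e.g. "openai", "anthropic", "google", "custom-ml"
  flagged: boolean;   // did this provider raise the concern in question?
}

function hasMajorityConsensus(flags: ProviderFlag[]): boolean {
  const flaggedCount = flags.filter((f) => f.flagged).length;
  return flaggedCount > flags.length / 2;
}

// Example: three of four providers flag possible infringement.
hasMajorityConsensus([
  { provider: "openai", flagged: true },
  { provider: "anthropic", flagged: true },
  { provider: "google", flagged: false },
  { provider: "custom-ml", flagged: true },
]); // true
```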
Making Decisions Based on Reports
Approval Guidelines
Consider approving when:
- Overall risk score below 30
- All individual risk scores (infringement, laundering, duplicate) in the low-risk range
- No specific high-confidence concerns identified
- Provider consensus supports approval
- Detailed findings show no significant issues
- Authenticity score above 70
Rejection Guidelines
Consider rejecting when:
- Overall risk score above 70
- Any individual risk score (infringement, laundering, or duplicate) in the high-risk range (71+)
- Clear evidence of infringement or laundering
- Authenticity score below 40 (AI-generated)
- Duplicate probability above 80
- Strong provider consensus on issues
Investigation Guidelines
Require further investigation when (the sketch after these guidelines combines all three threshold sets):
- Overall risk score between 31 and 70
- Mixed individual scores (some high, some low)
- Providers disagree on findings
- Evidence is unclear or ambiguous
- High-value registration with medium risk
- Applicant can provide additional documentation
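Read together, the three guideline sets amount to a single decision rule over the numeric thresholds. The sketch below encodes only those thresholds, reusing the ConfidenceScores and Recommendation types from the earlier sketches; it deliberately ignores the non-numeric criteria such as provider consensus and detailed findings, which still require human review.

```typescript
// Encodes the approval, rejection, and investigation thresholds from this guide.
// Illustrative only; it does not replace the manual review steps described above.

function recommend(overall: number, scores: ConfidenceScores): Recommendation {
  const risks = [scores.infringementRisk, scores.launderingRisk, scores.duplicateProbability];

  // Rejection: high overall risk, any high-risk individual score,
  // likely AI-generated content, or near-certain duplication.
  if (
    overall > 70 ||
    risks.some((r) => r >= 71) ||
    scores.authenticityScore < 40 ||
    scores.duplicateProbability > 80
  ) {
    return "reject";
  }

  // Approval: low overall risk, all individual risks low, and strong authenticity.
  if (overall < 30 && risks.every((r) => r <= 30) && scores.authenticityScore > 70) {
    return "approve";
  }

  // Everything else warrants manual investigation.
  return "investigate";
}
```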
Important Considerations
- AI reports are tools to assist decisions, not a replacement for human judgment
- Always read detailed findings, not just overall scores
- Document reasoning for decisions that contradict AI recommendations
- Consider broader context including registration history and applicant reputation
Common Report Scenarios
Scenario 1: Clear Approval
Overall Risk: 8
Infringement: 5, Laundering: 3, Authenticity: 95, Duplicates: 0
Action: Approve - All indicators positive, no concerns detected
Scenario 2: Clear Rejection
Overall Risk: 87
Infringement: 92, Laundering: 78, Authenticity: 88, Duplicates: 15
Action: Reject - High infringement and laundering risk
Scenario 3: AI-Generated
Overall Risk: 82
Infringement: 15, Laundering: 8, Authenticity: 22, Duplicates: 0
Action: Reject - AI-generated content not eligible
Scenario 4: Needs Investigation
Overall Risk: 48
Infringement: 52, Laundering: 28, Authenticity: 85, Duplicates: 12
Action: Investigate - Medium infringement risk requires review