πŸ“Š Dashboard

  • Total Analyses: 0
  • Average Risk Score: 0%
  • High Risk Analyses: 0

Recent Analyses (Last 10)

No analyses yet. Start analyzing content to see results here.

πŸ“ File Upload

Upload images, audio, or video files for deepfake detection and media forensics analysis.

Drag files here or click to upload

Supported: JPG, PNG, GIF, MP3, WAV, MP4, WebM (max 100MB)

πŸ“ Text Analysis

Analyze text for AI writing patterns and deepfake indicators.

πŸ”— URL Scanner

Check URL and domain reputation for suspicious patterns.

πŸ“§ Email Forensics

Analyze emails for phishing and fraud indicators.

πŸ“° Fake News Detection

Advanced fake news analysis using SIFT Method, CRAAP Test, and propaganda detection.

ℹ️ Fake Information Detection

Detect misinformation including scientific claims and bot amplification patterns.

🌐 Malicious Sites Detection

Detect typosquatting, phishing sites, and malicious domains.

πŸ”’ Cyber Fraud Detection

Detect investment scams, romance scams, phishing, and social engineering.

🧠 How It Works - Explainability & Responsible AI

Core Principles

πŸ” Transparency

Every score is explainable. Hover over "🧠 why?" to understand how each analysis works.

πŸ” Privacy-First

100% client-side processing. Your data never leaves your browser. No servers involved.

βš™οΈ No Black Boxes

All algorithms are heuristic-based and interpretable. No neural networks or opaque ML models.

πŸ‘€ Human-in-the-Loop

This tool assists human judgmentβ€”it doesn't replace it. Always verify results independently.

Score Calculation Framework

All modules produce a risk score from 0-100:

  • 0-25: Low Risk (🟒 Green) - Content appears authentic
  • 25-50: Concern (🟑 Yellow) - Some suspicious indicators
  • 50-75: Suspicious (🟠 Orange) - Multiple warning signs
  • 75-100: High Risk (πŸ”΄ Red) - Strong likelihood of manipulation

Scores are probabilistic estimates, not certainties. Human review is always recommended.
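
Reading these bands as lower-inclusive (an assumption, since the listed ranges share their boundary values: a score of exactly 25 is treated here as "Concern"), the mapping can be sketched in client-side TypeScript:

```typescript
// Risk bands from the 0-100 scale above. Boundary handling (lower-inclusive)
// is an assumption for illustration, not the tool's confirmed behavior.
type RiskBand = "Low Risk" | "Concern" | "Suspicious" | "High Risk";

function scoreToBand(score: number): RiskBand {
  if (score < 0 || score > 100) throw new RangeError("score must be 0-100");
  if (score < 25) return "Low Risk";
  if (score < 50) return "Concern";
  if (score < 75) return "Suspicious";
  return "High Risk";
}
```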

SIFT Method (Fake News Detection)

We use Mike Caulfield's SIFT Method: Stop, Investigate the source, Find better coverage, Trace claims to their original context.

  • Stop (S): 25%
  • Investigate (I): 30%
  • Find Better Coverage (F): 20%
  • Trace (T): 25%
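
The weighted combination of the four SIFT sub-scores can be sketched as follows. The per-step sub-scores and the aggregation rule (a plain weighted average) are illustrative assumptions; only the weights come from the breakdown above.

```typescript
// SIFT step weights from the breakdown above, as fractions summing to 1.
const SIFT_WEIGHTS = { stop: 0.25, investigate: 0.3, findCoverage: 0.2, trace: 0.25 };

type SiftScores = { [K in keyof typeof SIFT_WEIGHTS]: number };

// Combine per-step scores (each 0-100) into one 0-100 composite.
function siftScore(scores: SiftScores): number {
  let total = 0;
  for (const key of Object.keys(SIFT_WEIGHTS) as (keyof SiftScores)[]) {
    total += SIFT_WEIGHTS[key] * scores[key];
  }
  return total;
}
```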

CRAAP Test (Source Evaluation)

Currency, Relevance, Authority, Accuracy, Purpose - five criteria for evaluating sources.

  • Currency (C): 15%
  • Relevance (R): 10%
  • Authority (A): 25%
  • Accuracy (A): 30%
  • Purpose (P): 20%
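
Expressed as data, the CRAAP weights can be checked for consistency (they must sum to 100%) and applied to per-criterion scores. The aggregation shown is an assumed weighted average, not the tool's actual source:

```typescript
// CRAAP criterion weights from the breakdown above, as fractions.
const CRAAP_WEIGHTS: Record<string, number> = {
  currency: 0.15, relevance: 0.1, authority: 0.25, accuracy: 0.3, purpose: 0.2,
};

// Weighted average of per-criterion scores (each 0-100); missing criteria
// default to 0, a simplifying assumption.
function craapScore(scores: Record<string, number>): number {
  return Object.entries(CRAAP_WEIGHTS).reduce(
    (sum, [criterion, weight]) => sum + weight * (scores[criterion] ?? 0),
    0,
  );
}
```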

Analysis Methodology by Module

File Upload (Image/Audio/Video Analysis)

Image Analysis: Uses Error Level Analysis (ELA) to detect recompression artifacts, FFT frequency analysis for pattern anomalies, noise consistency checking, and color channel examination.
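
The core of the ELA step can be sketched as a pixel-wise comparison: recompress the image at a known JPEG quality, then measure how much each pixel changed. Regions whose error level deviates strongly from the image-wide mean may have a different compression history (e.g. a spliced patch). The functions below operate on plain grayscale pixel arrays for testability; real browser code would obtain pixel data via a `<canvas>` and `getImageData`, and the helper names are my own:

```typescript
// Per-pixel absolute difference between the original image and its
// recompressed copy (the "error level" in ELA).
function errorLevels(original: number[], recompressed: number[]): number[] {
  if (original.length !== recompressed.length) throw new Error("size mismatch");
  return original.map((p, i) => Math.abs(p - recompressed[i]));
}

// Image-wide mean error level, the baseline that local regions are
// compared against when looking for splicing artifacts.
function meanErrorLevel(levels: number[]): number {
  return levels.reduce((a, b) => a + b, 0) / levels.length;
}
```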

Audio Analysis: Analyzes spectral consistency, pitch uniformity, noise floor baseline, and digital artifact patterns using Web Audio API.

Video Analysis: Extracts keyframes and performs image analysis on each frame, detecting inconsistencies across the video timeline.

Text Analysis (AI Writing Detection)

Detects AI-generated or manipulated text by analyzing: sentence length variance, vocabulary repetition patterns, perplexity indicators, hedging language frequency, clichΓ© detection, and stylistic anomalies.
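
One of the signals named above, sentence-length variance, is straightforward to compute: AI-generated text often shows unusually uniform sentence lengths. The regex-based sentence splitting below is a naive simplification for illustration:

```typescript
// Population variance of sentence lengths (in words). Low variance is one
// heuristic indicator of machine-generated prose; the threshold for "low"
// is left to the caller.
function sentenceLengthVariance(text: string): number {
  const sentences = text.split(/[.!?]+/).map(s => s.trim()).filter(s => s.length > 0);
  if (sentences.length === 0) return 0;
  const lengths = sentences.map(s => s.split(/\s+/).length);
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  return lengths.reduce((a, l) => a + (l - mean) ** 2, 0) / lengths.length;
}
```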

URL Scanner (Domain & Link Analysis)

Checks: suspicious TLD patterns, URL length/encoding anomalies, typosquatting via Levenshtein distance against known domains, subdomain inconsistencies, parameter obfuscation, and domain age indicators.
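
The Levenshtein distance mentioned above is the classic dynamic-programming edit distance; a small distance between a candidate domain and a well-known one (e.g. "paypa1" vs "paypal") suggests typosquatting:

```typescript
// Single-row dynamic-programming Levenshtein distance: the minimum number
// of insertions, deletions, and substitutions turning `a` into `b`.
function levenshtein(a: string, b: string): number {
  const dp: number[] = Array.from({ length: b.length + 1 }, (_, j) => j);
  for (let i = 1; i <= a.length; i++) {
    let prev = dp[0]; // dp[i-1][0]
    dp[0] = i;
    for (let j = 1; j <= b.length; j++) {
      const tmp = dp[j]; // dp[i-1][j]
      dp[j] = Math.min(
        dp[j] + 1,     // deletion
        dp[j - 1] + 1, // insertion
        prev + (a[i - 1] === b[j - 1] ? 0 : 1), // substitution
      );
      prev = tmp;
    }
  }
  return dp[b.length];
}
```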

Email Forensics (Phishing & Spam Detection)

Analyzes: urgency language patterns, suspicious sender address patterns, spoofed domain detection, embedded link safety, urgency trigger words (verify, confirm, act now, limited time), and phishing keyword detection.
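
Counting the urgency trigger phrases listed above can be sketched as simple substring matching; the phrase list mirrors the examples in the text, but the matching rule is an illustrative assumption rather than the tool's actual logic:

```typescript
// Urgency trigger phrases from the description above.
const URGENCY_TRIGGERS = ["verify", "confirm", "act now", "limited time"];

// Number of distinct trigger phrases present in an email body
// (case-insensitive substring match).
function urgencyHits(emailBody: string): number {
  const text = emailBody.toLowerCase();
  return URGENCY_TRIGGERS.filter(t => text.includes(t)).length;
}
```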

Fake News Detection (Enhanced Framework)

SIFT Method: Stop (25%), Investigate (30%), Find Better Coverage (20%), Trace Claims (25%)

CRAAP Test: Currency (15%), Relevance (10%), Authority (25%), Accuracy (30%), Purpose (20%)

Readability: Flesch-Kincaid Grade Level for suspicious complexity patterns

Sensationalism: Forward-reference clickbait detection, information gaps, exaggeration patterns

Propaganda: 10 propaganda techniques (bandwagon, testimonial, loaded language, card-stacking, glittering generalities, transfer, repetition, fear appeal, hasty generalization, false cause)

Satire Detection: Irony markers, exaggeration patterns, parody indicators

Logical Fallacies: 12 types (ad hominem, strawman, appeal to authority, false dilemma, slippery slope, circular reasoning, begging the question, equivocation, hasty generalization, appeal to emotion, false analogy, post hoc ergo propter hoc)
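
The Readability component above uses the standard Flesch-Kincaid Grade Level formula: 0.39 Γ— (words / sentences) + 11.8 Γ— (syllables / words) βˆ’ 15.59. The sketch below takes the three counts as inputs; the tokenizer and syllable counter a real implementation would need are omitted:

```typescript
// Flesch-Kincaid Grade Level from raw word, sentence, and syllable counts.
function fleschKincaidGrade(words: number, sentences: number, syllables: number): number {
  if (words <= 0 || sentences <= 0) throw new RangeError("counts must be positive");
  return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59;
}
```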

Fake Info & Scientific Claims (Misinformation Detection)

Analyzes scientific claim hedging vs. absolutism, citation density, peer-review language, bot amplification patterns (hashtag spam, URL density, engagement pressure language), and consensus deviation indicators.
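
Of the bot-amplification signals above, hashtag spam and URL density reduce to token ratios. The whitespace tokenization and the choice to report raw 0-1 densities are simplifying assumptions:

```typescript
// Fraction of tokens that are hashtags or URLs in a social-media post.
// High densities of either are one amplification-pattern heuristic.
function amplificationDensity(post: string): { hashtag: number; url: number } {
  const tokens = post.split(/\s+/).filter(t => t.length > 0);
  if (tokens.length === 0) return { hashtag: 0, url: 0 };
  const hashtags = tokens.filter(t => t.startsWith("#")).length;
  const urls = tokens.filter(t => /^https?:\/\//.test(t)).length;
  return { hashtag: hashtags / tokens.length, url: urls / tokens.length };
}
```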

Malicious Sites Detection (Domain Safety)

Detects: typosquatting via Levenshtein similarity to popular domains, suspicious TLD patterns, URL obfuscation (encoding, redirects, abbreviated forms), domain age heuristics, and suspicious subdomains.

Cyber Fraud Detection (Scam Analysis)

Detects four fraud categories:

Investment Scams: Guaranteed returns, get-rich-quick language, Ponzi/MLM indicators

Romance Scams: Love bombing, urgency for money, travel/medical emergency pleas

Phishing/Social Engineering: Trust exploitation, urgency triggers, fake authority

Pressure Tactics: Time constraints, artificial scarcity, ultimatums

Limitations & Honest Disclosure

What This Tool Cannot Do

  • No Video Deepfake Detection: Advanced deepfake videos (face-swapping, voice cloning) require specialized ML models we intentionally avoid.
  • No Real-time Verification: We cannot verify live events or breaking news as they unfold.
  • No Reverse Image Search: We cannot check if images have been reused (use Google Images for this).
  • No Fact-Checking Database: We don't have access to comprehensive fact-checking databases.
  • No Behavioral Analysis: We cannot analyze user behavior patterns or account creation dates.
  • No Content Context: Without broader context, some satire or sarcasm may be misclassified.

Academic Sources & References

Our Methodology is Based On:

  1. The SIFT Method for source evaluation (Mike Caulfield)
  2. Meriam Library's CRAAP Test for Source Evaluation (California State University, Chico)
  3. Flesch Reading Ease (Flesch, 1948) and Flesch-Kincaid Grade Level (Kincaid et al., 1975)
  4. Aristotle's Rhetorical Appeals & Propaganda Techniques
  5. Logical Fallacies Taxonomy (Irving Copi, Introduction to Logic)
  6. Error Level Analysis (ELA) for Image Forensics (Neal Krawetz)
  7. Spectral Analysis for Audio Authenticity (IEEE papers)
  8. Levenshtein Distance for Typosquatting Detection (Levenshtein, 1966)
  9. Information Disorder misinformation taxonomy (Wardle & Derakhshan, 2017)

🀝 Ethical AI Commitment

ReWiseEd AI is committed to responsible AI principles: transparency in methodology, privacy-first design, no surveillance, avoiding algorithmic bias, and supporting human decision-making rather than replacing it. We believe technology should empower individuals to think critically, not automate critical thinking. Our tool is a helper, not an authority.