CheckYourMole – AI Mole Classification

Upload a mole image to see how AI analyzes it

⚠️ Educational only. Not a medical diagnosis.

🔬 Analyze Your Image

Original Image

No image selected

JPG/PNG (max 10MB)

📖 View photo guidelines for best results

(Hint: Use macro mode & natural light)

Preprocessed Image

Awaiting analysis

Hair removal + enhancement

Grad-CAM Heatmap

Awaiting analysis

Model focus visualization

Shows which regions influenced the AI's decision

Red/yellow = high attention from model

Blue/purple = low attention

Preparing image...

⏳ First analysis may take 10-15 seconds

Diagnosis: Awaiting analysis

Confidence: -

⚠️ This is an AI prediction for educational purposes only. Always consult a healthcare professional.

📸 Photo Guidelines for Best Results

✅ Good Photos

  • Macro mode: Use phone's macro mode for close-ups
  • Natural lighting: Take photos in daylight
  • Focus: Mole should be sharp and centered
  • Distance: Fill 60-80% of frame with lesion
  • No flash: Avoid harsh shadows and glare
  • Flat surface: Keep phone parallel to skin

❌ Avoid

  • Blurry images: Hold phone steady
  • Too far: Lesion should fill most of the frame
  • Bad lighting: No shadows or overexposure
  • Filters: Upload original, unedited photos
  • Obscured lesions: Remove hair/bandages
  • Multiple moles: One lesion per photo

📱 Phone Settings

  • Resolution: Use highest quality (12MP+)
  • HDR: Turn off HDR mode
  • Zoom: Use optical zoom, not digital
  • Format: Save as JPG or PNG
  • Stabilization: Enable image stabilization

🎯 Pro Tips

  • Use a ruler nearby for size reference
  • Take multiple photos from different angles
  • Avoid photos through glass or plastic
  • Clean the camera lens before shooting
  • For hairy areas, gently part hair to expose lesion

βš™οΈ Behind the Scenes

Processing Pipeline

📤
Upload

Your image is securely uploaded

→
🔧
Preprocessing

Hair removal, contrast enhancement

→
🧠
AI Analysis

EfficientNetV2-B3 classification

→
🎨
Grad-CAM

Visual explanation

→
📊
Results

Classification + confidence

πŸ” Preprocessing Steps

  • Hair Removal: Morphological filtering removes hair artifacts
  • CLAHE: Adaptive histogram equalization enhances local contrast
  • Color Normalization: Reduces lighting variations
  • Center Crop: Focuses on lesion region (85%)

🎓 What is Grad-CAM?

  • Gradient-weighted Class Activation Mapping
  • Shows which regions influenced the AI's decision
  • Red/yellow = high attention from model
  • Blue/purple = low attention
  • Verifies model looks at lesion, not background
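The arithmetic behind the heatmap fits in a few lines of NumPy. In the deployed app, `activations` and `gradients` would come from the last convolutional layer via TensorFlow's gradient tape; here they are plain arrays so the core math stands alone.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """activations, gradients: (H, W, C) arrays from the last conv layer."""
    # Channel importance = gradient of the class score, averaged over space.
    weights = gradients.mean(axis=(0, 1))                    # shape (C,)
    # Weighted sum of feature maps, then ReLU: keep only positive evidence.
    cam = np.maximum((activations * weights).sum(axis=-1), 0.0)
    # Normalize to [0, 1] so it can be rendered as a red-to-blue overlay.
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam
```

The ReLU is what makes red/yellow regions meaningful: only features that push the score *toward* the predicted class survive into the heatmap.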

🧠 Model Information & Training Journey

🚀 The Development Journey

This model represents months of iterative development and rigorous evaluation. Each training session involved approximately 2 hours of GPU computation across 70 epochs, processing over 10,000 dermoscopy images. Multiple training sessions were conducted to optimize hyperparameters, attention mechanisms, and preprocessing pipelines.

πŸ—‚οΈ Architecture

  • Base Model: EfficientNetV2-B3
  • Pretrained: ImageNet weights
  • Custom Head: Attention mechanism + Dense layers
  • Parameters: ~14M total, ~2M trainable
  • Input Size: 300Γ—300 RGB images
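A Keras sketch of this architecture, assuming a simple 1×1-conv spatial attention gate and a 128-unit dense layer for the custom head (the exact head composition is not specified above). `weights=None` keeps the sketch offline; the deployed model loads `weights="imagenet"`.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_model(input_size: int = 300) -> tf.keras.Model:
    # EfficientNetV2-B3 backbone (real model: weights="imagenet").
    base = tf.keras.applications.EfficientNetV2B3(
        include_top=False, weights=None,
        input_shape=(input_size, input_size, 3))
    x = base.output
    # Spatial attention: a 1x1 conv yields a per-pixel gate in (0, 1),
    # broadcast across all channels so the head focuses on the lesion.
    attn = layers.Conv2D(1, 1, activation="sigmoid", name="attention")(x)
    x = x * attn
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.25)(x)
    x = layers.Dense(128, activation="relu",
                     kernel_regularizer=tf.keras.regularizers.l2(2e-5))(x)
    out = layers.Dense(1, activation="sigmoid", name="malignant_prob")(x)
    return tf.keras.Model(base.input, out)
```

A single sigmoid output suits the binary benign/malignant task; the Dropout(0.25) and L2(2e-5) values match the training details listed below.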

📚 Training Data

  • Dataset: HAM10000 from ISIC Archive
  • Total Images: 10,015 dermoscopy images
  • Classes: Binary (Benign vs Malignant)
  • Malignant: Melanoma, BCC, Actinic Keratosis
  • Benign: Nevi, Seborrheic Keratosis, Vascular, Dermatofibroma
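The binary labels derive from HAM10000's seven diagnosis codes. A sketch of the mapping described above (HAM10000's `bkl` class covers seborrheic and other benign keratosis-like lesions):

```python
# HAM10000 diagnosis codes -> binary label used by this model.
MALIGNANT = {"mel", "bcc", "akiec"}   # melanoma, basal cell carcinoma, actinic keratosis
BENIGN = {"nv", "bkl", "vasc", "df"}  # nevi, benign keratoses, vascular, dermatofibroma

def to_binary(dx: str) -> int:
    """Return 1 for malignant, 0 for benign; reject unknown codes."""
    if dx in MALIGNANT:
        return 1
    if dx in BENIGN:
        return 0
    raise ValueError(f"unknown HAM10000 dx code: {dx}")
```

Raising on unknown codes is a deliberate safety choice: a silent default label would quietly corrupt the training set if the dataset schema ever changed.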

⚡ Training Process

  • Phase 1: 20 epochs with frozen backbone
  • Phase 2: 50 epochs with full fine-tuning
  • Total Time: ~2 hours on GPU
  • Optimizer: Adam (learning rate: 5e-4)
  • Data Split: 70% train, 15% validation, 15% test
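The two-phase schedule above can be sketched in Keras. The epoch counts and the 5e-4 Adam learning rate come from the summary; the 10× lower fine-tuning rate in phase 2 is an assumption (common transfer-learning practice, not stated here).

```python
import tensorflow as tf

def train_two_phase(model, backbone, train_ds, val_ds,
                    head_epochs=20, finetune_epochs=50, lr=5e-4):
    # Phase 1: backbone frozen -- only the custom head's ~2M parameters update.
    backbone.trainable = False
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=head_epochs)

    # Phase 2: unfreeze everything and fine-tune end to end.
    # Assumption: 10x lower learning rate to avoid destroying pretrained features.
    backbone.trainable = True
    model.compile(optimizer=tf.keras.optimizers.Adam(lr / 10),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=finetune_epochs)
    return model
```

Recompiling after toggling `trainable` is required in Keras; otherwise the optimizer keeps operating on the old set of trainable weights.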

βš™οΈ Training Details

  • Loss Function: Binary cross-entropy + attention penalty
  • Regularization: L2 (2e-5), Dropout (0.25)
  • Data Augmentation: Rotation, flip, zoom, brightness
  • Batch Size: 32 images per batch
  • Two-Phase Training: Head-only β†’ Full fine-tuning
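The four augmentations listed above map directly onto Keras preprocessing layers. The magnitudes (0.1) are illustrative assumptions, not the exact values used in training.

```python
import tensorflow as tf

# Rotation, flip, zoom, brightness -- applied only during training
# (these layers are identity functions at inference time).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomFlip("horizontal_and_vertical"),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.RandomBrightness(0.1, value_range=(0, 1)),
])

batch = tf.random.uniform((2, 300, 300, 3))   # two 300x300 RGB images in [0, 1]
augmented = augment(batch, training=True)
```

Skin lesions have no canonical orientation, which is why aggressive rotation and both flip directions are safe augmentations for this task.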

📈 Final Performance Metrics

✅ Clinical Validation Complete

The model was evaluated on 3,000 held-out test images (1,500 benign, 1,500 malignant) and successfully met all predefined clinical safety thresholds. The evaluation demonstrates reliable performance suitable for educational purposes.

🎯 Overall Performance

  • Accuracy: 83.9%
    Out of 3,000 test images, 2,517 were correctly classified. This means about 84 out of every 100 lesions are correctly identified.
  • AUC-ROC: 0.926
    Area Under the ROC Curve measures how well the model distinguishes between benign and malignant lesions across all confidence thresholds. A score of 0.926 (out of 1.0) indicates outstanding discrimination ability. For reference: 0.5 = random guessing, 0.7-0.8 = acceptable, 0.8-0.9 = excellent, >0.9 = outstanding.

🔬 Malignant Detection

  • Sensitivity: 92.1%
    Of 1,500 malignant cases, the model correctly identified 1,382. This means out of 100 dangerous lesions, 92 are caught. High sensitivity is crucial for screening tools to minimize missed cancers.
  • False Negatives: 118 cases (7.9%)
    These are malignant lesions incorrectly classified as benign. While minimized through optimization, this is why professional medical consultation is essential for all suspicious lesions.

✅ Benign Detection

  • Specificity: 75.7%
    Of 1,500 benign cases, the model correctly identified 1,135. This means about 76 out of 100 harmless lesions are correctly classified as benign, reducing unnecessary worry.
  • False Positives: 365 cases (24.3%)
    These are benign lesions incorrectly flagged as malignant. While higher than false negatives, this cautious approach prioritizes safety by erring on the side of detecting potential threats.

🎓 Precision & Predictive Values

  • Precision (PPV): 79.1%
    When the model predicts "malignant," it's correct about 79% of the time. Out of 1,747 malignant predictions, 1,382 were truly malignant. This measures how trustworthy a positive result is.
  • NPV (Negative Predictive Value): 90.6%
    When the model predicts "benign," it's correct about 91% of the time. This high NPV provides good reassurance when the model indicates a lesion is likely harmless.
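All of these percentages follow from a single confusion matrix on the 3,000-image test set; a quick sanity check in Python:

```python
# Confusion matrix implied by the figures above (3,000-image test set).
TP, FN = 1382, 118    # malignant: correctly caught / missed (false negatives)
TN, FP = 1135, 365    # benign: correctly cleared / falsely flagged (false positives)

accuracy    = (TP + TN) / (TP + TN + FP + FN)  # 2517 / 3000 = 0.839
sensitivity = TP / (TP + FN)                   # 1382 / 1500 ~ 0.921
specificity = TN / (TN + FP)                   # 1135 / 1500 ~ 0.757
precision   = TP / (TP + FP)                   # 1382 / 1747 ~ 0.791
npv         = TN / (TN + FN)                   # 1135 / 1253 ~ 0.906
```

Note how precision and NPV use column totals (all "malignant" or all "benign" predictions), while sensitivity and specificity use row totals (all truly malignant or truly benign cases).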

💡 What Do These Numbers Mean for You?

In simple terms, if you upload 100 skin lesion images:

• ~84 will be correctly classified (overall accuracy)
• If 50 are malignant, ~46 will be correctly detected (sensitivity 92.1%)
• If 50 are benign, ~38 will be correctly identified (specificity 75.7%)
• When it says "malignant," there's ~79% chance it's correct (precision)
• When it says "benign," there's ~91% chance it's correct (NPV)

⚠️ Critical Reminder: These statistics are based on professional dermoscopy images. Phone camera photos may perform differently. This tool is for educational demonstration only; always consult a dermatologist for actual medical decisions!

🔬 Clinical Safety Thresholds

All metrics exceeded predefined safety requirements for educational deployment:

Metric           Required Threshold   Achieved Result   Status
Accuracy         ≥ 70%                83.9%             ✅ Pass
Sensitivity      ≥ 85%                92.1%             ✅ Pass
Specificity      ≥ 70%                75.7%             ✅ Pass
AUC-ROC          ≥ 0.75               0.926             ✅ Pass
Precision (PPV)  ≥ 50%                79.1%             ✅ Pass
NPV              ≥ 85%                90.6%             ✅ Pass
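The pass/fail column reduces to a per-metric comparison against its floor. A sketch of such a deployment gate, using the thresholds and results from the table:

```python
# Safety floors and achieved results from the table above.
THRESHOLDS = {"accuracy": 0.70, "sensitivity": 0.85, "specificity": 0.70,
              "auc_roc": 0.75, "precision": 0.50, "npv": 0.85}
ACHIEVED   = {"accuracy": 0.839, "sensitivity": 0.921, "specificity": 0.757,
              "auc_roc": 0.926, "precision": 0.791, "npv": 0.906}

def safety_gate(achieved: dict, thresholds: dict) -> dict:
    """Map each metric to True (pass) or False (fail against its floor)."""
    return {m: achieved[m] >= floor for m, floor in thresholds.items()}

status = safety_gate(ACHIEVED, THRESHOLDS)
```

Note the asymmetric floors: sensitivity (≥ 85%) and NPV (≥ 85%) are held to much stricter standards than precision (≥ 50%), encoding the screening-tool priority of not missing cancers.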

⚠️ Important Limitations

  • This model is NOT a medical device and should not be used for clinical diagnosis
  • Trained on professional dermoscopy images; performance may vary with phone camera photos
  • Cannot detect all types of skin cancer or differentiate between specific malignant subtypes
  • Results should be interpreted as educational demonstrations, not medical advice
  • Always consult a dermatologist for evaluation of suspicious skin lesions

📊 Usage Statistics

Loading... Total Visitors
Loading... Images Analyzed
Loading... Avg Confidence

🔒 Your images are analyzed in memory only and never stored or saved

🌐 Data Collection

Statistics are collected anonymously through Google Analytics and stored securely. Real-time analytics help improve the educational experience for all users.

πŸ” Privacy Note: No personal information or uploaded images are stored. Only anonymous usage statistics (page views, analysis count) are tracked for educational purposes.

💬 Your Feedback Matters

💡 What We're Looking For

  • User Experience: Is the interface intuitive and easy to use?
  • Model Performance: Did the results match your expectations?
  • Feature Requests: What would make this tool more useful?
  • Technical Issues: Any bugs or errors you encountered?
  • Educational Value: Did you learn something about AI and medical imaging?

💼 For Recruiters & Hiring Managers

🚀 End-to-End ML Deployment Demonstration

This project demonstrates my ability to deliver a complete machine learning solution, from research and model training to production deployment and analytics integration:

  • Deep Learning Architecture: Custom EfficientNetV2-B3 with spatial attention mechanisms for lesion-focused classification
  • Computer Vision Pipeline: Advanced preprocessing including morphological hair removal, CLAHE contrast enhancement, and color normalization
  • Explainable AI: Grad-CAM (Gradient-weighted Class Activation Mapping) for transparent model decision visualization
  • Full-Stack Development: Python backend (FastAPI/Flask) with responsive HTML/CSS/JavaScript frontend
  • Cloud Deployment: Production deployment on Hugging Face Spaces with automated CI/CD pipeline
  • Analytics Integration: Google Analytics for user tracking + Google Sheets API for real-time statistics and feedback collection
  • Clinical Validation: Rigorous evaluation against medical safety thresholds (92.1% sensitivity, 83.9% accuracy, 0.926 AUC-ROC)
  • Model Training: Two-phase transfer learning with data augmentation, regularization, and hyperparameter optimization
  • Documentation: Comprehensive technical documentation, model cards, and user guidelines
  • Responsible AI: Clear disclaimers, privacy safeguards (no data storage), and educational purpose emphasis

📂 Technical Stack: Python, TensorFlow/Keras, OpenCV, NumPy, Pandas, FastAPI, HTML/CSS/JavaScript, Git, Hugging Face, Google Cloud APIs

📊 Metrics: 10,000+ images processed • 70 training epochs • ~2 hours GPU time per training session • 14M parameters

🎯 Key Achievements

  • Built and deployed a production-ready AI application end-to-end
  • Exceeded all clinical safety thresholds for educational deployment
  • Implemented explainable AI to ensure model transparency
  • Created responsive, user-friendly interface with real-time analytics
  • Demonstrated responsible AI practices with clear limitations and disclaimers

📧 Let's Connect

Interested in discussing this project or potential opportunities? Feel free to reach out via LinkedIn or explore more projects on my Hugging Face profile.