Santhosh Solomon

Head - Data Engineering, Advarisk

Linkedin | Github

🤓 Current Objective

Transitioning from a managerial role to an individual-contributor position, driven by a passion for technology and open-source software (OSS). I bring nine years of hands-on industry experience in Python development, specializing in Data Engineering, Web Scraping, and RESTful API development.

Eager to join teams committed to an engineering-first approach, focusing on developing high-quality SaaS or Fintech products.

💡 Technical Expertise

{
    "Programming Languages": [
        "Python 🐍",
        "Rust 🦀 🚧"
    ],
    "Skilled domains": [
        "Backend Engineering",
        "Web Scraping",
        "Data Engineering"
    ],
    "Backend frameworks": [
        "FastAPI ⚡",
        "Flask",
        "SQLAlchemy",
        "Celery"
    ],
    "Databases": [
        "MySQL 🐬",
        "MongoDB",
        "Redis"
    ],
    "Data Engineering": [
        "Prefect",
        "Polars",
        "Pandas",
        "Pydantic"
    ],
    "Developer tools": [
        "Git",
        "Jupyter Notebooks"
    ],
    "Cloud services": [
        "AWS"
    ],
    "Other skills": [
        "Continuous Integration",
        "Continuous Deployment",
        "Nginx",
        "Supervisor"
    ]
}

🛠️ Projects

🕷️ Scraper as a Service (Code name: Optimus Prime)

Project metadata

  • 🛠️ Tech Stack: Python, FastAPI, Celery, Pydantic, MySQL, Redis, PyTest
  • 👷 Role: Architect, Maintainer (Psst: wrote v0.1 of the application all by myself 🤓)

Key contributions

  • Architected and maintained a scalable, on-demand data-scraping system.
  • Implemented a Pub/Sub architecture with Celery for asynchronous processing.
  • Designed the service as an upstream app for product applications, capable of handling high traffic.
  • Dramatically improved scraper scalability, enabling the system to meet growing data demands.
  • Decoupled scraper code from downstream apps, allowing multiple products to consume data through a single API.
  • Significantly reduced maintenance overhead by centralizing scraper code.

Impacts

  • 💫 Put scrapers "on steroids" in terms of scalability
  • 🚀 Opened scope for serving multiple downstream applications
  • 🔮 Enabled multiple products to consume data through a single API application, removing the overhead of maintaining scraper code across multiple codebases
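The decoupling described above — an API that publishes scrape jobs and workers that consume them asynchronously — can be sketched with the standard library alone. This is only an illustration of the pattern, not the project's code: the real system uses Celery with a Redis broker, and the names below (`publish`, `worker`, the job fields) are hypothetical.

```python
# Minimal producer/consumer sketch of the Pub/Sub decoupling.
# In the actual service, the "publish" side is an API endpoint enqueueing a
# Celery task, and "worker" is a Celery worker pool consuming from Redis.
import queue
import threading

job_queue: "queue.Queue[dict | None]" = queue.Queue()
results: list[dict] = []

def publish(job: dict) -> None:
    """Stand-in for the API endpoint: enqueue the job and return immediately."""
    job_queue.put(job)

def worker() -> None:
    """Stand-in for an async worker: consume jobs until the sentinel arrives."""
    while True:
        job = job_queue.get()
        if job is None:  # sentinel value shuts the worker down
            job_queue.task_done()
            break
        # Placeholder for the actual scraping work.
        results.append({"url": job["url"], "status": "scraped"})
        job_queue.task_done()

t = threading.Thread(target=worker)
t.start()
publish({"url": "https://example.com/a"})
publish({"url": "https://example.com/b"})
job_queue.put(None)  # signal shutdown
t.join()
```

Because producers only ever touch the queue, any number of downstream products can publish jobs without knowing anything about the scraper internals — the same property the single-API design above provides.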

{Project Name 2}

Tech Stack: {Technologies used}

  • 🔹 {Key achievement or feature}
  • 🔹 {Another key point}
  • 🔹 {Impact or result}

👨‍💻 Work Experience

{Job Title} | {Company Name} | {Start Date} - {End Date}

  • 💼 {Key responsibility or achievement}
  • 💼 {Another significant contribution}
  • 💼 {Quantifiable impact or result}

{Previous Job Title} | {Company Name} | {Start Date} - {End Date}

  • 💼 {Key responsibility or achievement}
  • 💼 {Another significant contribution}
  • 💼 {Quantifiable impact or result}

🎓 Education

{Degree}

{University Name} | Graduated: {Year}

🏆 Certifications

  • 🥇 {Certification Name} | {Issuing Organization} | {Year}
  • 🥈 {Certification Name} | {Issuing Organization} | {Year}

🌟 Skills Showcase

Skill Category | Proficiency
{Category 1} | ⭐⭐⭐⭐⭐
{Category 2} | ⭐⭐⭐⭐
{Category 3} | ⭐⭐⭐

"{A brief, impactful quote that represents your professional philosophy}"