I’m a student at the National Institute of Technology with a deep enthusiasm for Machine Learning and for deriving insights from data. I enjoy coding in Python, especially for web development, where I build, deploy, and integrate scripts into web applications. My passion lies in combining data-driven approaches with software development to create solutions to real-world problems.
Machine Learning and Data Science enthusiast with a passion for extracting actionable insights from data and developing intelligent, scalable solutions. Proficient in Python and Django for full-stack web development, specializing in building, deploying, and integrating scripts into dynamic web applications. Skilled in Generative AI (GenAI), with experience in Hugging Face Transformers, OpenAI APIs, and fine-tuning models for custom applications. Hands-on experience with machine learning frameworks such as TensorFlow and scikit-learn, and with data analysis tools including Pandas, NumPy, and SQL. Adept at leveraging AI/ML models and data pipelines to solve complex problems and create impactful, data-driven applications that meet real-world demands.
During my internship at Market Scope, I focused on developing AI-driven solutions to automate and optimize recruitment and hiring processes. My primary contribution was a resume ranking model that used natural language processing (NLP) techniques and cosine similarity to quantify how relevant each resume was to a job description, enabling efficient, data-driven candidate shortlisting. I also developed a smart resume parsing system that combined named entity recognition (NER) with rule-based extraction to accurately extract, classify, and structure key candidate information, improving data integrity and usability. To support seamless data handling, I designed and implemented a scalable data management system, applying relational database design and indexing techniques to keep queries and retrieval fast. The internship gave me hands-on experience in machine learning, NLP, information retrieval, and database management, and strengthened my ability to build intelligent automation solutions in the HR tech domain.
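A minimal sketch of the ranking idea, assuming TF-IDF vectors as the text representation (the production pipeline's features may well have differed):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_resumes(job_description: str, resumes: list[str]) -> list[tuple[int, float]]:
    """Score each resume by cosine similarity to the job description."""
    vectorizer = TfidfVectorizer(stop_words="english")
    # Fit on the job description plus all resumes so they share one vocabulary.
    matrix = vectorizer.fit_transform([job_description] + resumes)
    # Row 0 is the job description; the remaining rows are resumes.
    scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
    # Return (resume_index, score) pairs, best match first.
    return sorted(enumerate(scores), key=lambda pair: pair[1], reverse=True)

ranking = rank_resumes(
    "Python developer with NLP and Django experience",
    ["Java engineer, Spring Boot", "Python/Django developer, spaCy and NLTK"],
)
print(ranking)  # the Python/Django resume should score higher
```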
To Be Honest, a subsidiary of Tatsya.com, specializes in Back-End Development, Data Analytics, and Machine Learning models. We craft custom software solutions to enhance functionality and productivity across industries.
In my research and analytics work focused on Small and Medium Businesses (SMBs), I conducted an in-depth analysis of the challenges SMBs face in scaling their operations, accessing resources, and improving profitability. Using data-driven insights, I identified key obstacles such as limited access to funding, inadequate technology adoption, and market competition. This analysis informed strategic recommendations to improve business efficiency, optimize resource allocation, and develop targeted solutions supporting SMB growth and sustainability.
Grade: First class distinction.
Grade: First class distinction.
Explore some of my work below.
Built a Generative AI web app that creates social media posts with captions, hashtags, and images from user prompts. Integrated the OpenAI and Pexels APIs for content generation and visuals. Developed with Next.js, Django, and Django REST Framework, enabling real-time customization and interactive editing.
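A minimal sketch of the caption-generation step, assuming the current openai Python client (the model name and prompts here are illustrative, not the app's exact configuration; the Pexels image search follows the same request/response pattern):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_post(topic: str) -> str:
    """Ask the model for a short social media post: caption plus hashtags."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-completions model works
        messages=[
            {"role": "system", "content": "You write concise social media posts."},
            {"role": "user", "content": f"Write a caption and 3 hashtags about: {topic}"},
        ],
    )
    return response.choices[0].message.content

print(generate_post("sunrise trek in the Western Ghats"))
```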
Built an AI-powered system to automate resume parsing, data extraction, and storage in Excel via the AppSheet API. It accurately identifies key candidate details and provides an interactive review interface for refinement, improving HR efficiency and data accuracy while reducing manual effort.
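A minimal sketch of the NER-plus-rules parsing idea, using spaCy's pretrained pipeline (an assumption for illustration; the production system's entity labels and extraction rules may differ):

```python
import re
import spacy

nlp = spacy.load("en_core_web_sm")  # pretrained English pipeline (install separately)

def parse_resume(text: str) -> dict:
    """Extract a candidate name via NER, plus email/phone via rule-based regexes."""
    doc = nlp(text)
    name = next((ent.text for ent in doc.ents if ent.label_ == "PERSON"), None)
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text)
    phone = re.search(r"\+?\d[\d\s-]{8,}\d", text)
    return {
        "name": name,
        "email": email.group() if email else None,
        "phone": phone.group() if phone else None,
    }

print(parse_resume("Asha Rao | asha.rao@example.com | +91 98765 43210 | Python developer"))
```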
Open Library is a Django-based web app that lets users explore and contribute to a collection of books and blogs. Features include user authentication for signing up, logging in, and managing accounts, as well as the ability to add personal blogs.
Health Track offers personalized diet plans and a Random Forest model that predicts blood sugar levels with 65% accuracy. It also features an AI chatbot for instant health support.
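A minimal sketch of how such a Random Forest predictor could be trained with scikit-learn, on hypothetical features and synthetic data (the app's real feature set and dataset are not shown here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical features: [age, BMI, daily sugar intake in grams];
# label: 0 = normal blood sugar, 1 = elevated.
rng = np.random.default_rng(42)
X = rng.normal(loc=[45, 26, 60], scale=[12, 4, 20], size=(500, 3))
y = (X[:, 2] + rng.normal(0, 15, 500) > 65).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```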
SHELTER MAP is a web application designed to provide real-time assistance during natural disasters. It integrates data from multiple sources, including the OCHA database and satellite imagery, to offer accurate and timely information during emergencies.
Developed a real-time pricing solution using the Google Maps, Yelp, and OpenWeatherMap APIs, analyzing data from 50+ competitors. Integrated dynamic pricing based on weather and traffic, achieving 80% accuracy in optimized menu price predictions and helping increase revenue.
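A minimal sketch of the weather-driven side of the pricing logic, assuming OpenWeatherMap's current-weather endpoint and an illustrative adjustment rule (the project's actual pricing model was more involved):

```python
import requests

API_KEY = "YOUR_OPENWEATHERMAP_KEY"  # placeholder

def weather_adjusted_price(base_price: float, city: str) -> float:
    """Bump the price slightly when it is cold or raining (illustrative rule)."""
    resp = requests.get(
        "https://api.openweathermap.org/data/2.5/weather",
        params={"q": city, "appid": API_KEY, "units": "metric"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    temp = data["main"]["temp"]
    raining = any(w["main"] == "Rain" for w in data["weather"])
    multiplier = 1.0
    if temp < 15:
        multiplier += 0.05  # cold weather: +5%
    if raining:
        multiplier += 0.05  # rain: +5%
    return round(base_price * multiplier, 2)

print(weather_adjusted_price(120.0, "Rourkela"))
```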
Implemented lexicon-based sentiment analysis using TextBlob to classify 10,000+ customer reviews as positive, negative, or neutral, providing actionable insights to improve customer satisfaction and refine business strategy.
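The core of a lexicon-based classifier like this can be sketched with TextBlob's polarity score (the thresholds here are illustrative):

```python
from textblob import TextBlob

def classify_review(text: str, threshold: float = 0.1) -> str:
    """Map TextBlob polarity (-1 to 1) onto three sentiment classes."""
    polarity = TextBlob(text).sentiment.polarity
    if polarity > threshold:
        return "positive"
    if polarity < -threshold:
        return "negative"
    return "neutral"

for review in ["Great product, works perfectly!", "Terrible support.", "It arrived on time."]:
    print(review, "->", classify_review(review))
```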
Developed 4 interactive dashboards using Microsoft Power BI to visualize and analyze sales data across multiple regions, enabling real-time insights and improving sales performance tracking.
Extracted 1,000+ words of textual data from Blackcoffer web articles and performed text analysis to compute variables such as Positive Score, Negative Score, and Polarity Score. Achieved 95% accuracy in sentiment classification using NLP techniques and Python libraries such as NLTK and TextBlob.
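The scoring step can be sketched as below, assuming tokenized, stop-word-filtered words and positive/negative word lists (the lists here are assumed for illustration, not the assignment's actual lexicons):

```python
def sentiment_scores(words: list[str], positive: set[str], negative: set[str]) -> dict:
    """Count lexicon hits and derive a polarity score in [-1, 1]."""
    pos_score = sum(1 for w in words if w.lower() in positive)
    neg_score = sum(1 for w in words if w.lower() in negative)
    # The small epsilon keeps the division defined when no lexicon words occur.
    polarity = (pos_score - neg_score) / (pos_score + neg_score + 1e-6)
    return {"positive": pos_score, "negative": neg_score, "polarity": round(polarity, 3)}

words = "the service was excellent but delivery was poor".split()
print(sentiment_scores(words, positive={"excellent", "good"}, negative={"poor", "bad"}))
```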
Below are the details to reach out to me!
Rourkela, Odisha