Works
Resilient Multi-Robot Navigation via Probabilistic Model Checking
This project explores the integration of probabilistic model checking (PMC) techniques to enhance the resilience and reliability of autonomous navigation in multi-robot systems, specifically using the Jackal robot platform. The study involves a comprehensive review of probabilistic modelling algorithms and their application in real-time decision-making for autonomous robots. Using the Robot Operating System (ROS), a simulated warehouse environment is developed in Gazebo to test collaborative navigation tasks between multiple Jackal robots.
Key elements include the use of Discrete-Time Markov Chains (DTMC) to model stochastic robot states such as battery level and distance travelled, with verification performed through the PRISM model checker. Experiments analyze the relationship between battery capacity and mission success rate, demonstrating that PMC integration improves fault tolerance and mission completion reliability in complex environments. Real-time data is visualized using RViz, and sensor integration is tested for future deployment on physical robots.
The results show that multi-robot collaboration, supported by probabilistic validation, significantly improves navigation resilience. The methodology lays the foundation for further development of autonomous systems capable of self-verification and real-time probabilistic reasoning.
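As an illustration only (not the actual PRISM model used in the project), the Python sketch below encodes a heavily simplified DTMC in which a robot spends one battery unit per waypoint plus, with some probability, an extra unit, and estimates the probability of mission completion for different battery capacities by Monte Carlo simulation. The constants P_EXTRA_DRAIN and N_WAYPOINTS and the helper names are hypothetical; in PRISM, an analogous reachability query such as P=? [ F "mission_complete" ] would be checked exactly rather than estimated by sampling.

```python
# Minimal sketch (not the project's PRISM model): a Discrete-Time Markov Chain
# over (battery level, waypoints remaining). Each step the robot reaches the
# next waypoint and, with some probability, consumes an extra unit of battery.
# Monte Carlo over the chain estimates P(mission success) for a given capacity.
import random

P_EXTRA_DRAIN = 0.3   # assumed probability of an extra battery unit per step
N_WAYPOINTS = 20      # assumed mission length (number of waypoints)

def simulate_mission(battery_capacity, rng):
    """Run one trajectory of the DTMC; return True if all waypoints are reached."""
    battery = battery_capacity
    for _ in range(N_WAYPOINTS):
        battery -= 1                       # nominal cost of reaching a waypoint
        if rng.random() < P_EXTRA_DRAIN:   # stochastic extra drain (obstacles, re-planning)
            battery -= 1
        if battery <= 0:
            return False                   # robot stranded before completing the mission
    return True

def estimate_success_rate(battery_capacity, n_runs=10_000, seed=0):
    rng = random.Random(seed)
    successes = sum(simulate_mission(battery_capacity, rng) for _ in range(n_runs))
    return successes / n_runs

if __name__ == "__main__":
    for capacity in (20, 25, 30, 35):
        print(f"capacity={capacity:2d}  P(mission success) ~ {estimate_success_rate(capacity):.3f}")
```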
Deep Neural Networks For Point Clouds

Unlike images, which are arranged on uniform dense grids, 3D point clouds are irregular and unordered, which makes standard convolution difficult to apply. In this paper, we propose a convolution operation that can be applied directly to point clouds, enabling deep convolutional networks on this data. To handle point clouds, the discrete kernels must be replaced with continuous ones: we treat the kernel as a nonlinear function of the local coordinates of the 3D points, composed of a weight function and a density function. The weight functions are approximated with multi-layer perceptrons (MLPs) and combined with the density function. Max-pooling is then used for aggregation, which enables us to significantly increase the network's capacity and performance. Experiments on the ModelNet40 and ShapeNet datasets show that deep convolution on point clouds achieves state-of-the-art performance on classification and segmentation tasks.
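As a toy illustration of the continuous-kernel idea (not the paper's implementation), the NumPy sketch below weights each point's k nearest neighbours by a small MLP applied to their local coordinates, rescales by a crude inverse-density estimate, and sums the weighted features. The MLP parameters are random stand-ins for learned ones, and the shapes, neighbourhood size, and density proxy are arbitrary choices.

```python
import numpy as np

def mlp_scalar_weight(rel_xyz, w1, b1, w2, b2):
    """Tiny MLP mapping relative coordinates (k, 3) -> one kernel weight per neighbour (k, 1)."""
    h = np.maximum(rel_xyz @ w1 + b1, 0.0)   # ReLU hidden layer
    return h @ w2 + b2                       # (k, 1) continuous kernel values

def point_conv(points, feats, k, params):
    """Naive O(N^2) continuous convolution: weight each point's k neighbours by an
    MLP of their local coordinates, rescale by an inverse-density estimate, and sum."""
    w1, b1, w2, b2 = params
    n = points.shape[0]
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)  # (N, N)
    out = np.zeros_like(feats)
    for i in range(n):
        nbrs = np.argsort(dists[i])[:k]              # k nearest neighbours (includes i itself)
        rel = points[nbrs] - points[i]               # local coordinates around the centre point
        inv_density = dists[i, nbrs].mean() + 1e-6   # sparse neighbourhoods get larger weight
        w = mlp_scalar_weight(rel, w1, b1, w2, b2)   # (k, 1) learned kernel weights
        out[i] = (w * feats[nbrs]).sum(axis=0) * inv_density
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts, fts = rng.normal(size=(128, 3)), rng.normal(size=(128, 8))
    params = (rng.normal(size=(3, 16)), np.zeros(16), rng.normal(size=(16, 1)), np.zeros(1))
    print(point_conv(pts, fts, k=8, params=params).shape)   # -> (128, 8)
```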
K-Means Based Customer Persona Discovery

In this project, I explored unsupervised learning through K-Means clustering to segment retail customers based on two key features: Annual Income and Spending Score. I applied the Elbow Method to determine the optimal number of clusters, which turned out to be five. Each cluster represents a distinct customer persona, from high-income low spenders to low-income high spenders.
The project involved:
- Performing Exploratory Data Analysis (EDA) to understand feature distributions.
- Implementing the K-Means algorithm with Scikit-learn.
- Visualizing clusters and centroids to interpret customer behavior patterns.
- Deriving actionable insights to support targeted marketing strategies.

This hands-on experience deepened my understanding of unsupervised learning techniques and how data-driven segmentation can empower business decisions.
Cluster 1 (red) -> earning high but spending less
Cluster 2 (blue) -> average in terms of earning and spending
Cluster 3 (green) -> earning high and also spending high [TARGET SET]
Cluster 4 (cyan) -> earning less but spending more
Cluster 5 (magenta) -> earning less and spending less
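A minimal sketch of the workflow described above follows, assuming a two-column layout like the common mall-customers dataset; the file name and column names are placeholders rather than the project's exact ones.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

df = pd.read_csv("Mall_Customers.csv")                       # placeholder path
X = df[["Annual Income (k$)", "Spending Score (1-100)"]].values

# Elbow Method: plot within-cluster sum of squares (inertia) for k = 1..10
inertias = [KMeans(n_clusters=k, n_init=10, random_state=42).fit(X).inertia_
            for k in range(1, 11)]
plt.plot(range(1, 11), inertias, marker="o")
plt.xlabel("k"); plt.ylabel("inertia"); plt.title("Elbow Method")
plt.show()

# Fit the chosen model (k = 5) and visualize the personas as colored clusters
kmeans = KMeans(n_clusters=5, n_init=10, random_state=42)
labels = kmeans.fit_predict(X)

plt.scatter(X[:, 0], X[:, 1], c=labels, cmap="rainbow", s=20)
plt.scatter(*kmeans.cluster_centers_.T, c="black", marker="x", s=100, label="centroids")
plt.xlabel("Annual Income"); plt.ylabel("Spending Score"); plt.legend(); plt.show()
```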
FloraVision: A Comparative Study of CNN Architectures
In this project, I developed and evaluated three different Convolutional Neural Network (CNN) models to classify flower images into five categories: daisy, dandelion, rose, sunflower, and tulip, using the Kaggle Flower Recognition dataset.
The models compared include:
- A CNN model built from scratch with three convolutional layers.
- A VGG19 model using transfer learning, with frozen base layers and custom dense layers.
- A ResNet50 model, also using transfer learning with adjusted trainable layers.

To enhance robustness, I implemented two dataset preparation methods: one using tf.keras.preprocessing.image_dataset_from_directory, and another by manually loading and preprocessing images using os, cv2, and NumPy. Both methods included data augmentation through ImageDataGenerator.
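The sketch below illustrates one of the three models, the VGG19 transfer-learning variant with a frozen base and a custom dense head, trained on data loaded with tf.keras.preprocessing.image_dataset_from_directory. The directory path, image size, head layer sizes, and training settings are assumptions rather than the project's exact configuration, and the manual os/cv2 pipeline and ImageDataGenerator augmentation are omitted for brevity.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

IMG_SIZE, NUM_CLASSES = (224, 224), 5   # daisy, dandelion, rose, sunflower, tulip

# Method 1: load images straight from class-named folders (placeholder path)
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "flowers/", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="training", seed=42)
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    "flowers/", image_size=IMG_SIZE, batch_size=32,
    validation_split=0.2, subset="validation", seed=42)

# Frozen VGG19 base (ImageNet weights) plus a custom dense head
base = VGG19(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),                    # simple [0, 1] scaling for this sketch
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```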
Key highlights:
- Demonstrated differences in performance between manual preprocessing and built-in loading methods.
- Analyzed model performance through training accuracy, validation accuracy, and loss curves.
- Found that VGG19 performed best with the manual preprocessing method, while the custom CNN model outperformed the others when using the direct dataset loading method.
- Used the trained model to successfully predict flower classes on unseen data (a short prediction sketch follows the closing summary below).
This project strengthened my understanding of image classification, data augmentation, model tuning, and transfer learning techniques in TensorFlow and Keras.
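For the prediction step on unseen data, a minimal sketch might look like the following; the saved-model path, image path, and class-name order are placeholders (the order must match the class_names of the training dataset).

```python
import numpy as np
import tensorflow as tf

class_names = ["daisy", "dandelion", "rose", "sunflower", "tulip"]   # placeholder order

model = tf.keras.models.load_model("floravision_vgg19.h5")           # placeholder saved model
img = tf.keras.utils.load_img("unseen_flower.jpg", target_size=(224, 224))
x = tf.keras.utils.img_to_array(img)[np.newaxis, ...]                # add a batch dimension
probs = model.predict(x)[0]
print(class_names[int(np.argmax(probs))], float(probs.max()))        # predicted class and confidence
```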
