Fig. 1: Example 2D slices of a 3D brain image, showing a normal subject (top) and a subject with glioma (bottom). Ref: CS446 UIUC
Results obtained from the U-net model. Ref.: N. Artrith, T. Morawietz, and J. Behler, Phys. Rev. B 83, 153101 (2011)
ORP titled "On the inclusion of long-range interactions among molecules in machine learning models".
Developed an ORP on the inclusion of long-range (electrostatic) interactions in ML and DL algorithms. Analyzed different neural network schemes, especially deep high-dimensional neural networks (HDNNs), and whether they can efficiently learn and predict molecular charges derived from various charge partitioning schemes.
Studied different charge partitioning schemes, such as Hirshfeld, Charge Model 5, Merz-Singh-Kollman, and Natural Bond Orbital methods, used in molecular dynamics simulations.
Proposed modifications to the existing mathematical formulation and structure of HDNNs to better predict molecular charges, and outlined the computational cost of implementing the project, potential setbacks, and alternate plans; a schematic sketch of the per-atom charge idea is shown below.
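To make the HDNN charge idea concrete, here is a minimal sketch, assuming precomputed atom-centered descriptor vectors and per-element subnetworks whose atomic charges sum to the molecular charge. All names and layer sizes are illustrative, not the proposal's actual architecture.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: each atom has a descriptor vector, and a small
# per-element subnetwork predicts its partial charge; the molecular charge
# is the sum of the atomic charges.
class ChargeHDNN(nn.Module):
    def __init__(self, n_descriptors, elements):
        super().__init__()
        self.nets = nn.ModuleDict({
            el: nn.Sequential(nn.Linear(n_descriptors, 32), nn.Tanh(), nn.Linear(32, 1))
            for el in elements
        })

    def forward(self, descriptors, element_labels):
        # One charge per atom; the total molecular charge is their sum.
        q = torch.stack([self.nets[el](d).squeeze()
                         for el, d in zip(element_labels, descriptors)])
        return q, q.sum()
```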
Please find the details of my ORP report here.
Double Deep Q-Network (DQN) on the game of Breakout using the OpenAI Gym.
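The key difference from a vanilla DQN is how the bootstrap target is formed; below is a minimal sketch, assuming PyTorch Q-networks `online_net` and `target_net` (hypothetical names) that map a batch of states to per-action Q-values.

```python
import torch

# Double DQN target: the online network picks the greedy next action,
# the target network evaluates it. Decoupling selection from evaluation
# curbs the Q-value overestimation seen in vanilla DQN.
def double_dqn_targets(online_net, target_net, rewards, next_states, dones, gamma=0.99):
    with torch.no_grad():
        next_actions = online_net(next_states).argmax(dim=1, keepdim=True)
        next_q = target_net(next_states).gather(1, next_actions).squeeze(1)
        return rewards + gamma * (1.0 - dones) * next_q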
Results obtained from YOLO
Deep Convolutional Generative Adversarial Network Architecture. Ref: https://gluon.mxnet.io/chapter14_generative-adversarial-networks/dcgan.html
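A minimal sketch of a generator in this style, following the linked reference: stacked transposed convolutions upsample a latent vector into an image. The latent size (100) and 3×32×32 output here are illustrative choices.

```python
import torch.nn as nn

# DCGAN-style generator: input is a latent vector of shape (N, 100, 1, 1);
# each ConvTranspose2d doubles the spatial size (1 -> 4 -> 8 -> 16 -> 32).
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, 4, 1, 0, bias=False), nn.BatchNorm2d(256), nn.ReLU(True),
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False), nn.BatchNorm2d(128), nn.ReLU(True),
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False), nn.BatchNorm2d(64), nn.ReLU(True),
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False), nn.Tanh(),  # 3x32x32 output in [-1, 1]
)
```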
Results obtained from LSGAN and DSGAN
Effect of spectral normalization on the quality of generated images
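One way to apply this technique is PyTorch's built-in spectral normalization wrapper (the exact layers it was applied to in this project are not shown here):

```python
import torch.nn as nn

# Spectral normalization rescales a layer's weight so its largest singular
# value stays near 1, which constrains the discriminator's Lipschitz
# constant and stabilizes GAN training.
disc_layer = nn.utils.spectral_norm(nn.Conv2d(64, 64, 3, padding=1))
```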
Visualizing the performance of the RNN model by creating a confusion matrix. The ground truth languages of samples are represented by rows in the matrix while predicted languages are represented by the columns.
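A minimal sketch of how such a matrix can be built, assuming integer-encoded language labels (`y_true` and `y_pred` are illustrative names):

```python
import numpy as np

# Rows index the ground-truth language, columns the predicted language,
# matching the layout described in the caption.
def confusion_matrix(y_true, y_pred, n_classes):
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm
```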
Sample output generated by the RNN after training on Shakespeare is shown below, followed by a sketch of the sampling loop.
TRIA:
He dew that with merry a man for the strange.
I then to the rash, so must came of the chamuness, and that I'll treason dost
the heaven! how there. The run of these thou instress
Which wast true come come on my tongue.
KATHARINE:
My lord, the crown English am a thanks, and I
have you weep you galls. O, I wast thy change;
And go turn of my love to the master.'
ARCHBISHOP OF YORK:
I'll find by dogs, noble.
SAMLET:
The matter were be true and treason
Free supples'd best the soldiered.
TITUS ANDRONICUS:
I ever a bood;
But one a stand have a court in thee: which man as thy break on my bed
'As oath a women; there and shake me; and whencul, comes the house
For them he wall; and no live away. Fies, sir.
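A minimal sketch of the sampling loop that produces text like the above, assuming a trained PyTorch character-level model `model` that returns `(logits, hidden)`, and character/index maps `stoi`/`itos` (all hypothetical names):

```python
import torch

# Prime the hidden state on a seed string, then repeatedly sample the next
# character from the temperature-scaled softmax and feed it back in.
def sample(model, stoi, itos, prime="KATHARINE:", length=200, temperature=0.8):
    model.eval()
    hidden = None
    for ch in prime:
        logits, hidden = model(torch.tensor([[stoi[ch]]]), hidden)
    out = prime
    for _ in range(length):
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        idx = torch.multinomial(probs, 1).item()
        out += itos[idx]
        logits, hidden = model(torch.tensor([[idx]]), hidden)
    return out
```

Lower temperatures make the output more conservative; higher ones make it more varied but less coherent.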
A sample image from the PASCAL VOC 2007 dataset
The average precision for each class after 55 epochs of training on Google Cloud (16 GB RAM, one NVIDIA K80 GPU) is shown below.
| Class | AP |
|-------------|--------|
| aeroplane | 0.6468 |
| bicycle | 0.4247 |
| bird | 0.3491 |
| boat | 0.3989 |
| bottle | 0.1596 |
| bus | 0.2318 |
| car | 0.6456 |
| cat | 0.3552 |
| chair | 0.4179 |
| cow | 0.2235 |
| diningtable | 0.3586 |
| dog | 0.3028 |
| horse | 0.6846 |
| motorbike | 0.5332 |
| person | 0.7901 |
| pottedplant | 0.2159 |
| sheep | 0.2858 |
| sofa | 0.2924 |
| train | 0.5996 |
| tvmonitor | 0.2998 |

mAP: 0.4108
Average loss: 0.1801
A sample image from the CIFAR-10 image classification dataset. Ref: https://www.cs.toronto.edu/~kriz/cifar.html
Implemented multi-layer neural networks (two and three layers) from scratch on the CIFAR-10 image classification dataset to understand the fundamentals of neural networks and backpropagation.
Developed code for the forward and backward passes (sketched after this list), and trained two- and three-layer networks with the SGD and Adam optimizers.
Gained experience with hyperparameter tuning and proper train/validation/test splits.
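A minimal sketch of a two-layer forward/backward pass, assuming a ReLU hidden layer and softmax cross-entropy loss; this is illustrative, not the exact assignment code.

```python
import numpy as np

# X: inputs (N, D); Y: one-hot targets (N, C); W1, b1, W2, b2: parameters.
def forward_backward(X, Y, W1, b1, W2, b2):
    # Forward pass.
    h = np.maximum(0, X @ W1 + b1)                 # ReLU hidden activations
    scores = h @ W2 + b2
    exp = np.exp(scores - scores.max(axis=1, keepdims=True))
    probs = exp / exp.sum(axis=1, keepdims=True)   # softmax
    loss = -np.log(probs[np.arange(len(X)), Y.argmax(1)]).mean()
    # Backward pass: gradients of the mean cross-entropy loss.
    dscores = (probs - Y) / len(X)
    dW2, db2 = h.T @ dscores, dscores.sum(0)
    dh = dscores @ W2.T
    dh[h <= 0] = 0                                 # ReLU gradient mask
    dW1, db1 = X.T @ dh, dh.sum(0)
    return loss, (dW1, db1, dW2, db2)
```

SGD then updates each parameter as `p -= lr * dp`; Adam additionally tracks running first and second moments of the gradients.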
Results of using the SGD and Adam optimizers on the CIFAR-10 dataset. Note that Adam converges faster than SGD in the loss-history plot, as expected. We also see overfitting in the classification plot, since our dataset was relatively small compared to the number of network parameters.
Architecture of an ad hoc IR system. Source: Jurafsky and Martin 2009, sec. 23.2
Principal components of the feature data and the clusters obtained (k-means and hierarchical clustering) using scikit-learn's PCA
In this project, we worked with abstracts of research papers published on different aspects of coronaviruses over the years. Our goal was to segment the abstracts into clusters based on similarities in the topics they cover.
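A minimal sketch of such a pipeline; `abstracts` is a tiny stand-in for the real corpus, and the cluster count is illustrative.

```python
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.decomposition import PCA
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder corpus; replace with the real abstracts.
abstracts = ["coronavirus spike protein structure", "transmission dynamics model",
             "antiviral drug screening", "epidemic growth rate estimation"]

# Vectorize text, then cluster with both k-means and hierarchical clustering.
X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
km_labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)
hc_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X.toarray())

# Project onto the first two principal components to visualize the clusters.
coords = PCA(n_components=2).fit_transform(X.toarray())
```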
(a) Comparison of SGD with Momentum, Nesterov Accelerated Gradient, and Adam, and their rates of convergence. (b) Image compression using PCA
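For reference, a minimal sketch of the update rules compared in panel (a); hyperparameter values are illustrative.

```python
import numpy as np

# Classical momentum: accumulate a velocity and step along it.
# Nesterov accelerated gradient uses the same update but evaluates the
# gradient g at the look-ahead point w + beta * v.
def sgd_momentum(w, g, v, lr=1e-2, beta=0.9):
    v = beta * v - lr * g
    return w + v, v

# Adam: bias-corrected running estimates of the gradient's first and
# second moments give per-parameter adaptive step sizes.
def adam(w, g, m, s, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g
    s = b2 * s + (1 - b2) * g * g
    m_hat, s_hat = m / (1 - b1 ** t), s / (1 - b2 ** t)
    return w - lr * m_hat / (np.sqrt(s_hat) + eps), m, s
```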
In this project, we worked with IMDB movie reviews and developed different machine learning models to predict a given review as positive or negative.
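A minimal sketch of one such model, a bag-of-words baseline with TF-IDF features and logistic regression; `reviews` and `labels` are placeholders, and this is not necessarily the exact set of models used in the project.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data; replace with the real IMDB reviews (1 = positive).
reviews = ["a wonderful, moving film", "dull plot and wooden acting"]
labels = [1, 0]

# TF-IDF over unigrams and bigrams, followed by a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(reviews, labels)
print(clf.predict(["surprisingly good"]))
```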
An illustration of topic modelling. Ref: Medium article
In this project, we worked with research papers published on different aspects of coronaviruses over the years. Our goal was to use topic modelling to identify the different areas each paper covers and to answer some important questions about the viruses.
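A minimal sketch of topic modelling with LDA in scikit-learn; `papers` is a tiny placeholder corpus and the topic count is illustrative, so the project's actual pipeline may differ.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder documents; replace with the real paper texts.
papers = ["coronavirus spike protein binding", "vaccine trial immune response"]

counts = CountVectorizer(stop_words="english")
X = counts.fit_transform(papers)

# Fit LDA and print the top words for each discovered topic.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = counts.get_feature_names_out()
for k, comp in enumerate(lda.components_):
    top = [vocab[i] for i in comp.argsort()[-5:]]
    print(f"topic {k}: {top}")
```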
Use of spectral clustering to separate data that K-means cannot. (a) Original data. (b) Clustering using K-means. (c) Spectral clustering. A code sketch reproducing this effect follows below.
Images taken from CS 400 UIUC and GeeksForGeeks
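A minimal sketch reproducing the effect in the figure, using scikit-learn's synthetic concentric-circles data, which k-means cannot separate but spectral clustering can.

```python
from sklearn.cluster import KMeans, SpectralClustering
from sklearn.datasets import make_circles

# Two concentric circles: not linearly separable into the right clusters.
X, _ = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# K-means splits the plane into two half-spaces and mixes the circles.
km_labels = KMeans(n_clusters=2, random_state=0).fit_predict(X)

# Spectral clustering on a nearest-neighbor graph recovers each circle.
sc_labels = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                               random_state=0).fit_predict(X)
```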
First-order corrections to the chemical potential, grand potential, and internal energy at finite temperature in the grand canonical ensemble
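For context, the textbook statement of first-order thermodynamic perturbation theory in the grand canonical ensemble, assuming a splitting $H = H_0 + H_1$; this is the standard result, not necessarily the exact derivation behind this figure.

```latex
% First-order correction to the grand potential for H = H_0 + H_1:
% the unperturbed grand-canonical average of the perturbation.
\Omega \;\approx\; \Omega_0 + \langle H_1 \rangle_0 ,
\qquad
\langle \,\cdot\, \rangle_0
  = \frac{\operatorname{Tr}\!\left[\,\cdot\; e^{-\beta (H_0 - \mu N)}\right]}
         {\operatorname{Tr} e^{-\beta (H_0 - \mu N)}} .
% The first-order shift in \mu at fixed mean particle number follows from
% \langle N \rangle = -\,\partial \Omega / \partial \mu, and the internal
% energy from U = \Omega + T S + \mu \langle N \rangle .
```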