Publications

• Kamalinejad E., "Visual Similarity through Assisted Invariant Deep Features", work in progress
• Haley D., Kamalinejad E., Zhong F., "IsoClustering: A Generalized Framework for Local Data Clustering", 2018, submitted
• Kamalinejad E., "On local well-posedness of the thin-film equation via the Wasserstein gradient flow", 2015, Springer: Calculus of Variations and Partial Differential Equations, Volume 52, Issue 3, pp. 547–564
• Kamalinejad E., Moradifam A., "Radial Symmetry of Large Solutions of Semilinear Elliptic Equations with Convection", 2014, Proceedings of the Royal Society of Edinburgh: Section A Mathematics, Volume 144, Issue 1, pp. 139–147
• Kamalinejad E., "An Optimal Transport Approach to Nonlinear Evolution Equations", 2012, University of Toronto, PhD Thesis
• AlNabulsi S., Kamalinejad E., Meskas J., Wang J., Yin K., Downton J., "Azimuthal Elastic Inversion for Fracture Characterization", 2012, IMA Preprint Series #2399
• Kamalinejad E., "Analysis of Natural Framing of Knots", 2007, Shahid Beheshti University, BSc Thesis

Projects

Graph Sparsification for L1 Clustering

We worked on this project with Thomas Laurent and Kevin Costello at UC Riverside. We proposed the newly developed method of graph sparsification based on NI-forest sampling as a preprocessing step for L1 Cheeger cut clustering, and we showed that this approach yields a competitively fast yet highly accurate clustering algorithm. We presented this work at an American Mathematical Society sectional conference. Here is the link to the presentation.
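As background, the objective behind this work is the ratio (L1) Cheeger cut. The sketch below is a generic illustration of that quantity, not code from the project; the toy graph and the `cheeger_cut` name are made up for the example.

```python
import numpy as np

def cheeger_cut(W, labels):
    """L1 (ratio) Cheeger cut of a two-way partition:
    cut(S, S_complement) / min(|S|, |S_complement|),
    where W is a symmetric weighted adjacency matrix and labels are 0/1."""
    S = labels.astype(bool)
    cut = W[np.ix_(S, ~S)].sum()          # total weight crossing the partition
    return cut / min(S.sum(), (~S).sum())

# toy graph: two triangles joined by a single bridge edge
W = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    W[i, j] = W[j, i] = 1.0
labels = np.array([0, 0, 0, 1, 1, 1])
print(cheeger_cut(W, labels))  # natural split: cut weight 1 over side size 3 = 1/3
```

Sparsification helps because the cut value of a good partition is approximately preserved on a graph with far fewer edges, so the optimization runs much faster.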

A Geometric Approach to Unsupervised Classification

We worked on this project with Fay Zhong and David Haley. We proposed a new method of unsupervised classification based on geometric ideas borrowed from the properties of smooth surfaces. The first phase of the project is complete and resulted in a fast, parallelizable algorithm; the corresponding paper has been submitted. We are now working on the second phase, which deals with overlapping clusters.

Multilayer Neural Networks with Visualization

While my go-to libraries for neural network computations are TensorFlow and PyTorch, there is great learning value in implementing these computations from scratch. This sample tutorial, developed for my machine learning course, builds a feed-forward neural network from scratch, showing every step of constructing the network and training it with backpropagation. We also visualize the evolution of the learning process to help illustrate how it works. Here is the tutorial code in Python (Jupyter Notebook).
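To give a flavor of what such a from-scratch implementation involves, here is a minimal sketch (not the tutorial itself): a two-layer sigmoid network trained on XOR with full-batch backpropagation. The architecture, learning rate, and toy data are illustrative choices, and the recorded loss curve stands in for the tutorial's visualization.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# two-layer network: input -> hidden (sigmoid) -> output (sigmoid)
n_in, n_hidden, n_out, lr = 2, 8, 1, 0.5
W1 = rng.normal(0.0, 1.0, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, n_out)); b2 = np.zeros(n_out)

# toy data: XOR, the classic non-linearly-separable problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

losses = []
for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    losses.append(np.mean((y - t) ** 2))
    # backward pass: chain rule through squared error and both sigmoids
    dy = (y - t) * y * (1 - y)
    dh = (dy @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ dy;  b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh;  b1 -= lr * dh.sum(axis=0)
```

Plotting `losses` over the iterations shows the learning dynamics that the tutorial visualizes in more depth.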

Parallel Computation for Solving the 2D Wave Equation

Many new computation techniques, such as parallel computation on GPUs, have been developed to speed up deep learning. But these techniques are not limited to deep learning; one can use them to solve computationally heavy problems in other fields as well. This tutorial, from my mathematical modeling course, studies all of the steps involved in developing a robust scientific model of 2D waves, from both theoretical and practical perspectives. The implementation allows the computation to be deployed on a cluster or GPU, with up to a 100x speedup compared to CPU computation, and can serve as a guide for similar modeling problems with heavy computations.

Here is the tutorial code in Python (Jupyter Notebook).
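A minimal sketch of the core idea, not the tutorial code: a vectorized leapfrog update for the 2D wave equation with fixed boundaries. Because the update is pure array arithmetic, swapping NumPy for a GPU array library such as CuPy moves it to the GPU with almost no code changes. The grid size, time step, and initial condition below are illustrative, chosen to satisfy the CFL stability condition (c·dt/dx ≤ 1/√2).

```python
import numpy as np

def step_wave(u_prev, u, c=1.0, dt=0.05, dx=0.1):
    """One leapfrog step of u_tt = c^2 (u_xx + u_yy) on a grid with
    zero (fixed) boundaries, fully vectorized over the interior."""
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] +
                       u[1:-1, 2:] + u[1:-1, :-2] - 4 * u[1:-1, 1:-1]) / dx**2
    u_next = 2 * u - u_prev + (c * dt)**2 * lap
    u_next[0, :] = u_next[-1, :] = u_next[:, 0] = u_next[:, -1] = 0.0
    return u_next

# initial condition: Gaussian bump at rest on a 101x101 grid over [-5, 5]^2
n = 101
x = np.linspace(-5.0, 5.0, n)
X, Y = np.meshgrid(x, x)
u_prev = np.exp(-(X**2 + Y**2))
u = u_prev.copy()                      # zero initial velocity
for _ in range(50):
    u_prev, u = u, step_wave(u_prev, u, dt=0.05, dx=x[1] - x[0])
```

On a GPU, the same slicing expressions compile to parallel kernels, which is where the large speedups over a CPU loop come from.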

Multimodal Object Recognition

In this project, my students and I studied object recognition when the input has multiple modes (RGB and depth). In the first phase of the project, we used data captured from a Microsoft HoloLens headset to perform real-time face recognition from these signals. One of the challenges of this project was the headset's limited computational power, so we needed to apply neural network pruning to get good results. This project was funded by the National Science Foundation (NSF).
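The project's pruning details are not spelled out here, so purely as a generic illustration: one common technique is unstructured magnitude pruning, which zeros out the smallest-magnitude weights of a layer. The function name and sparsity level below are made up for the example.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured magnitude pruning: zero out the smallest-magnitude
    `sparsity` fraction of entries; return (pruned weights, keep-mask)."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)                  # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(1)
W = rng.normal(size=(100, 100))                    # stand-in for a layer's weights
W_pruned, mask = magnitude_prune(W, sparsity=0.9)  # keep only the largest 10%
```

Pruned networks can then be stored and run sparsely, which is what makes inference feasible on resource-constrained devices like a headset.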

Visual and Textual Search in a Unified Deep Learning Embedding Space

Deep learning models have great representational power, and their embeddings in a geometric space can be used to build very effective search engines. I co-founded a startup called Xoosha for visual search in the fashion domain. Using the latent embeddings of a convolutional network trained with a triplet loss, one can already build a decent search method. But one can push this further: carefully designing the architecture for the specific task, embedding in the right space, and using a custom loss produces stellar results. In addition, with an appropriate embedding method one can create a shared space for visual, textual, and other types of data, so that search in this space is guided by both the visual and the textual content of an item. To the best of our knowledge, the technology we built in this project is unique. Here are a few samples. In sample 1, an image of a shoe is given, but rather than searching only for similar shoes, the model retrieves items that are visually similar to the given image and textually similar to the given expression ("glitter") in the shared (visual + textual) space. One can see the huge potential of such technology in e-commerce.
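The production system is proprietary, so this is only a hedged sketch of the final search step, assuming image and text embeddings already live in one shared space: blend the two query embeddings and rank the catalog by cosine similarity. All names, the toy vectors, and the blending weight `alpha` are illustrative.

```python
import numpy as np

def normalize(v):
    """Scale vectors to unit length along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def joint_search(img_vec, txt_vec, catalog, alpha=0.5, top_k=5):
    """Rank catalog rows by cosine similarity to a blend of the image
    and text query embeddings (assumed to share one embedding space)."""
    query = normalize(alpha * normalize(img_vec) + (1 - alpha) * normalize(txt_vec))
    sims = normalize(catalog) @ query
    return np.argsort(-sims)[:top_k]

# toy shared space: item 0 matches both the "shoe" image and the "glitter" text,
# item 1 matches only the image, item 2 matches neither
catalog = np.array([[1.0, 1.0, 0.0, 0.0],
                    [1.0, 0.0, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])
img_query = np.array([1.0, 0.0, 0.0, 0.0])
txt_query = np.array([0.0, 1.0, 0.0, 0.0])
ranking = joint_search(img_query, txt_query, catalog, top_k=3)
```

Here `ranking` puts item 0 first, since it is close to both modalities of the query; tuning `alpha` shifts the balance between visual and textual relevance.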