Irched Chafaa



PhD in Networks, Information and Communication Sciences, Université Paris-Saclay, France.

IBM Data Science Professional Certificate.

View My LinkedIn Profile

View My GitHub Profile

PORTFOLIO

Selected projects in Machine Learning



Transformer model for flexible power optimization (Unsupervised Learning)

The transformer model is trained in an unsupervised framework to predict optimal downlink powers for wireless networks, such that the minimum spectral efficiency (SE) is maximized. The transformer architecture handles varying input sizes, matching the dynamic nature of wireless networks with changing user loads and active transmitters. The trained model provides the optimal powers without requiring them during training, which simplifies the data-gathering phase for the wireless system.
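
Below is a minimal PyTorch sketch of the idea: a transformer encoder maps a variable number of per-user channel features to downlink powers, and the training loss is the negative minimum spectral efficiency, so no optimal-power labels are needed. The feature dimensions and the simplified interference model are illustrative assumptions, not the project's exact system model.

```python
# Minimal sketch: illustrative dimensions and a simplified SINR model (assumptions).
import torch
import torch.nn as nn

class PowerTransformer(nn.Module):
    """Maps per-user channel features to downlink powers for any number of users."""
    def __init__(self, feat_dim=8, d_model=64, nhead=4, num_layers=2, p_max=1.0):
        super().__init__()
        self.embed = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 1)
        self.p_max = p_max

    def forward(self, x):                        # x: (batch, n_users, feat_dim)
        h = self.encoder(self.embed(x))          # (batch, n_users, d_model)
        return self.p_max * torch.sigmoid(self.head(h)).squeeze(-1)  # powers in [0, p_max]

def min_se_loss(powers, gains, noise=1e-3):
    """Unsupervised loss: negative minimum spectral efficiency across users."""
    signal = powers * gains                                     # (batch, n_users)
    interference = signal.sum(dim=1, keepdim=True) - signal     # simplified interference term
    se = torch.log2(1.0 + signal / (interference + noise))
    return -se.min(dim=1).values.mean()          # maximizing the worst-user SE

# One training step on random data; no optimal-power labels are required.
model = PowerTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 6, 8)              # 32 samples, 6 active users, 8 features each
gains = torch.rand(32, 6)             # per-user channel gains
opt.zero_grad()
loss = min_se_loss(model(x), gains)
loss.backward()
opt.step()
```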




Transformer model for flexible power optimization (Supervised Learning)

The transformer model is trained to predict optimal powers (uplink and downlink jointly) for wireless networks, using the positions of users and transmitters as input. The transformer architecture handles varying input sizes, matching the dynamic nature of wireless networks with changing user loads and active transmitters. The trained model reaches the power performance of traditional optimization methods while requiring lower computational complexity and less input information.

View code on GitHub
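
As a rough illustration of the supervised variant, the sketch below maps user positions to joint uplink and downlink powers and regresses them against optimizer-computed targets; the dimensions and the placeholder targets are assumptions made for the example, not the project's exact setup.

```python
# Minimal supervised sketch: illustrative shapes; targets would come from a conventional optimizer.
import torch
import torch.nn as nn

class JointPowerTransformer(nn.Module):
    """Predicts uplink and downlink powers per user from 2-D positions."""
    def __init__(self, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Linear(2, d_model)                      # (x, y) position per user
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, 2)                       # [uplink, downlink] power per user

    def forward(self, positions):                               # (batch, n_users, 2)
        return torch.sigmoid(self.head(self.encoder(self.embed(positions))))

model = JointPowerTransformer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
positions = torch.rand(32, 6, 2)         # a varying number of users is supported at inference
target_powers = torch.rand(32, 6, 2)     # placeholder for optimizer-computed optimal powers
opt.zero_grad()
loss = nn.functional.mse_loss(model(positions), target_powers)
loss.backward()
opt.step()
```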



Self-supervised deep learning for beam prediction in mmWave networks

Beam prediction in dynamic mmWave networks is a challenging task. In this project, self-supervised deep learning is leveraged to predict mmWave beams for a single access-point/receiver link from sub-6 GHz channel data and optimize the data rate. The trained neural network takes the sub-6 GHz channel data as input and directly outputs the beamforming vector for the mmWave band.

View code on GitHub
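
The sketch below illustrates this mapping under simplifying assumptions (a DFT codebook, illustrative antenna counts, and a plain rate-based loss rather than the project's exact formulation): the network only sees the sub-6 GHz channel, and the loss rewards beams that achieve a high mmWave rate, so no beam labels are required.

```python
# Minimal sketch: assumed DFT codebook, illustrative antenna counts, simplified rate loss.
import math
import torch
import torch.nn as nn

N_SUB6, N_MMWAVE = 8, 32            # antenna counts at sub-6 GHz and mmWave (assumptions)
angles = 2 * math.pi * torch.outer(torch.arange(N_MMWAVE), torch.arange(N_MMWAVE)) / N_MMWAVE
codebook = torch.polar(torch.ones_like(angles), angles) / N_MMWAVE ** 0.5   # DFT beam codebook

class BeamPredictor(nn.Module):
    """Maps a sub-6 GHz channel (real/imag parts stacked) to soft weights over mmWave beams."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * N_SUB6, 128), nn.ReLU(),
                                 nn.Linear(128, N_MMWAVE))

    def forward(self, h_sub6):                             # (batch, 2 * N_SUB6)
        return torch.softmax(self.net(h_sub6), dim=-1)     # probability of picking each beam

model = BeamPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

h_sub6 = torch.randn(64, 2 * N_SUB6)                       # stacked real/imag sub-6 GHz channel
h_mm = torch.randn(64, N_MMWAVE, dtype=torch.cfloat)       # mmWave channel (never labeled)
rate = torch.log2(1.0 + (h_mm @ codebook).abs() ** 2)      # achievable rate per candidate beam
opt.zero_grad()
loss = -(model(h_sub6) * rate).mean()                      # reward beams with high expected rate
loss.backward()
opt.step()
```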


In a multi-link network, a federated learning scheme is proposed to predict the mmWave beamforming vectors locally at each access point. Federated learning consists of sharing local models with a server, which aggregates them into a more informed global model for all links, improving performance without the access points exchanging their local data. In this project, we investigate both synchronous and asynchronous uploading of the local models and compare their performance to centralized learning.

View code on GitHub
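
A minimal synchronous round of federated averaging might look like the sketch below; the toy linear model and random data stand in for the per-access-point beam predictors, and the asynchronous variant would instead fold each client update into the global model as soon as it arrives.

```python
# Minimal synchronous FedAvg sketch (toy model and data, for illustration only).
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, targets, epochs=1, lr=1e-2):
    """Each access point trains a copy of the global model on its own local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(data), targets).backward()
        opt.step()
    return model.state_dict()

def federated_average(states):
    """The server averages client weights; raw local data never leaves the clients."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(16, 32)                   # stand-in for a per-link beam predictor
for _round in range(5):                            # communication rounds
    states = [local_update(global_model, torch.randn(64, 16), torch.randn(64, 32))
              for _ in range(4)]                   # 4 access points with local data
    global_model.load_state_dict(federated_average(states))
```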


SpaceX Falcon 9 first stage Landing Prediction

This is the final project of the IBM Data Science Professional Certificate on Coursera. It uses machine learning tools to predict whether the Falcon 9 first stage will land successfully. The project involves data collection, exploratory data analysis, data visualization, feature engineering, SQL queries, and prediction with different estimators. The resulting machine learning pipeline predicts whether the first stage will land and can be reused, which largely determines the cost of a launch.

View code on GitHub
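
A condensed version of the prediction step could look like the scikit-learn sketch below; the synthetic features stand in for the real launch records collected and engineered earlier in the project.

```python
# Minimal sketch of the classification step (synthetic features, for illustration only).
import numpy as np
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                  # stand-ins for payload mass, orbit, flight number, ...
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # 1 = first stage landed successfully

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
pipe = make_pipeline(StandardScaler(), LogisticRegression())
grid = GridSearchCV(pipe, {"logisticregression__C": [0.1, 1.0, 10.0]}, cv=5)
grid.fit(X_train, y_train)
print("Test accuracy:", grid.score(X_test, y_test))
```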


Customer feedback sentiment analysis

In this project, I classify customer feedback from various sources, such as social media, review platforms and testimonials, as positive or negative by performing sentiment analysis.

View code on GitHub
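
A minimal sketch of such a classifier, assuming a simple TF-IDF plus logistic regression pipeline rather than the project's exact model:

```python
# Minimal sentiment-classification sketch (toy examples; assumed TF-IDF + logistic regression).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["Great product, works perfectly", "Terrible support, very disappointed",
         "Loved the experience", "Would not recommend this to anyone"]
labels = [1, 0, 1, 0]                       # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["Great experience overall"]))   # should print [1] (positive)
```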


Classification and Regression

This project contains notebooks on regression and classification problems. The first notebook addresses a classification problem: predicting whether or not a patient is infected with COVID-19. The second notebook predicts diamond prices using tabular regression and contains two main parts: EDA and price prediction.

View code on GitHub
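
For the diamond-price part, a compact tabular-regression sketch (synthetic rows and an assumed random-forest model, for illustration only) could look like this:

```python
# Minimal tabular-regression sketch; the actual notebook starts with EDA on a real diamonds dataset.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline

df = pd.DataFrame({"carat": [0.3, 0.7, 1.1, 1.5],
                   "cut": ["Ideal", "Good", "Premium", "Ideal"],
                   "price": [450, 1800, 5200, 9400]})          # illustrative rows

prep = ColumnTransformer([("cut", OneHotEncoder(handle_unknown="ignore"), ["cut"])],
                         remainder="passthrough")               # keep numeric carat as-is
model = make_pipeline(prep, RandomForestRegressor(n_estimators=100, random_state=0))
model.fit(df[["carat", "cut"]], df["price"])
print(model.predict(pd.DataFrame({"carat": [0.9], "cut": ["Good"]})))   # estimated price
```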


Page template forked from evanca