I am a quantitative risk professional with over a decade of experience in model development, validation, and governance across the banking, insurance, and academic research sectors. My expertise spans data science, model risk management, and predictive analytics, with a deep understanding of the model lifecycle and its alignment with regulatory frameworks such as SR 11-7, OCC 2011-12, and DFAST.
I have worked extensively on a wide array of models, including credit loss forecasting, stress testing, market risk, capital adequacy, and machine learning applications for fraud detection and anti-money laundering (AML). Throughout my career, I have applied a broad range of statistical and machine learning techniques, including advanced time series analysis, Bayesian inference, optimization, simulation, and natural language processing.
I possess in-depth knowledge of big data analytics, data quality assessment, data validation, and model testing. My doctoral research focused on Bayesian deep learning to estimate critical credit risk metrics such as Probability of Default (PD), Loss Given Default (LGD), and Exposure at Default (EAD), a novel approach to assessing model risk through the lens of aleatoric and epistemic uncertainty.
My core areas of expertise include machine learning, deep learning, reinforcement learning, econometrics, and financial risk management. I am proficient in a diverse set of tools and programming languages, including Python, R, SAS, Stata, JavaScript, LaTeX, and GIS. This technical versatility enables me to develop, deploy, and interpret complex models across various platforms while effectively communicating insights to both technical and non-technical stakeholders.