Researcher:
Ali, Özden Gür

Job Title

Faculty Member

First Name

Özden Gür

Last Name

Ali

Name Variants

Ali, Özden Gür
Ali, Fatma Özden

Search Results

Now showing 1 - 10 of 13
  • Publication
    Targeting resources efficiently and justifiably by combining causal machine learning and theory
    (Frontiers Media SA, 2022) Department of Business Administration; Ali, Özden Gür; Faculty Member; Department of Business Administration; College of Administrative Sciences and Economics; 57780
    Introduction: Efficient allocation of limited resources relies on accurate estimates of potential incremental benefits for each candidate. These heterogeneous treatment effects (HTE) can be estimated with properly specified theory-driven models and observational data that contain all confounders. Using causal machine learning to estimate HTE from big data offers higher benefits with limited resources by identifying additional heterogeneity dimensions and fitting arbitrary functional forms and interactions, but decisions based on black-box models are not justifiable. Methods: Our solution is designed to increase resource allocation efficiency, enhance the understanding of the treatment effects, and increase the acceptance of the resulting decisions with a rationale that is in line with existing theory. The case study identifies the right individuals to incentivize for increasing their physical activity to maximize the population's health benefits due to reduced diabetes and heart disease prevalence. We leverage large-scale data from multi-wave nationally representative health surveys and theory from published global meta-analysis results. We train causal machine learning ensembles, extract the heterogeneity dimensions of the treatment effect and the sign and monotonicity of its moderators with explainable AI, and incorporate them into the theory-driven model with our generalized linear model with qualitative constraints (GLM_QC) method. Results: The results show that the proposed methodology improves the expected health benefits for diabetes by 11% and for heart disease by 9% compared to the traditional approach of using the model specification from the literature and estimating the model with large-scale data. Qualitative constraints not only prevent counter-intuitive effects but also improve achieved benefits by regularizing the model.
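The idea of imposing theory-derived sign constraints on a GLM can be sketched with a projected-gradient logistic regression. The function name and estimation routine below are illustrative stand-ins, not the paper's GLM_QC implementation:

```python
import numpy as np

def fit_glm_qc(X, y, signs, lr=0.1, n_iter=2000):
    """Logistic regression with qualitative sign constraints on coefficients.

    signs[j] = +1 forces beta[j] >= 0, -1 forces beta[j] <= 0, 0 leaves it
    free. Estimated by projected gradient descent: after each step, any
    coefficient violating its constraint is projected back to zero.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted probabilities
        grad = X.T @ (mu - y) / n              # logistic log-loss gradient
        beta -= lr * grad
        beta[(signs > 0) & (beta < 0)] = 0.0   # project onto sign constraints
        beta[(signs < 0) & (beta > 0)] = 0.0
    return beta
```

Because the projection is applied at every step, the returned coefficients always respect the qualitative constraints, which is also what regularizes the fit.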
  • Publication
    How removing prescription drugs from reimbursement lists increases the pharmaceutical expenditures for alternatives
    (Springer, 2011) Department of Business Administration; N/A; Ali, Özden Gür; Topaler, Başak; Faculty Member; Master Student; Department of Business Administration; College of Administrative Sciences and Economics; Graduate School of Business; 57780; N/A
    Changing the status of drugs from prescription-only to over-the-counter and removing them from the reimbursement list has been used as a cost-reduction measure by several third-party payers. In June 2006, the Turkish government, in an effort to curtail costs, removed many prescription drugs from the reimbursement list. This paper examines the effect of this policy on the expenditures for drugs that were removed from the reimbursement list and for their reimbursable alternatives that can be prescribed by physicians on patient request. To accomplish this, actual expenditures in four anatomical therapeutic chemical (ATC) groups were compared with expected expenditures in the absence of the policy change for both removed and alternative drugs. The findings indicated that the expenditures on alternative drugs increased beyond expectations. In two of the four ATC groups involved in the study, the increase was large enough to wipe out the reduction in expenditures on the drugs removed from the reimbursement list.
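The accounting behind this finding can be illustrated with a hypothetical calculation (all figures invented): delisting saves money on the removed drugs, but excess spending on reimbursable alternatives offsets it.

```python
def policy_effect(actual_removed, expected_removed, actual_alt, expected_alt):
    """Net expenditure effect of delisting: savings on removed drugs minus
    the excess spending on reimbursable alternatives, each measured against
    the expected (no-policy-change) expenditure."""
    savings = expected_removed - actual_removed      # reduction on delisted drugs
    substitution = actual_alt - expected_alt         # excess on alternatives
    return savings - substitution                    # negative => net increase
```

With invented numbers, `policy_effect(60, 100, 150, 100)` gives -10: a 40-unit saving on removed drugs is wiped out by a 50-unit overshoot on alternatives, the pattern observed in two of the four ATC groups.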
  • Publication
    Dynamic churn prediction framework with more effective use of rare event data: the case of private banking
    (Pergamon-Elsevier Science Ltd, 2014) Department of Business Administration; N/A; Ali, Özden Gür; Arıtürk, Umut; Faculty Member; PhD Student; Department of Business Administration; College of Administrative Sciences and Economics; Graduate School of Business; 57780; N/A
    Customer churn prediction literature has been limited to modeling churn in the next (feasible) time period. On the other hand, lead time specific churn predictions can help businesses to allocate retention efforts across time, as well as customers, and identify early triggers and indicators of customer churn. We propose a dynamic churn prediction framework for generating training data from customer records, and leverage it for predicting customer churn within multiple horizons using standard classifiers. Further, we empirically evaluate the proposed approach in a case study about private banking customers in a European bank. The proposed framework includes customer observations from different time periods, and thus addresses the absolute rarity issue that is relevant for the most valuable customer segment of many companies. It also increases the sampling density in the training data and allows the models to generalize across behaviors in different time periods while incorporating the impact of the environmental drivers. As a result, this framework significantly increases the prediction accuracy across prediction horizons compared to the standard approach of one observation per customer; even when the standard approach is modified with oversampling to balance the data, or lags of customer behavior features are added as additional predictors. The proposed approach to dynamic churn prediction involves a set of independently trained horizon-specific binary classifiers that use the proposed dataset generation framework. In the absence of predictive dynamic churn models, we benchmark against survival analysis, which is used predominantly as a descriptive tool. The proposed method outperforms survival analysis in terms of predictive accuracy for all lead times, with a much lower variability.
Further, unlike Cox regression, it provides horizon-specific ranking of customers in terms of churn probability, which allows allocation of retention efforts across customers and time periods. (C) 2014 Elsevier Ltd. All rights reserved.
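The dataset-generation idea can be sketched as follows; the schema and labeling rule are a simplified illustration of the framework, not the paper's exact specification:

```python
def build_training_rows(history, horizons):
    """One training row per (customer, observation period, horizon).

    `history` maps a customer id to (list of per-period feature dicts, churn
    period index or None). Every pre-churn period contributes an observation,
    which densifies the rare churn class compared with one snapshot per
    customer. The label marks churn within the next `h` periods; the
    horizon-specific classifiers would each be trained on their own rows.
    """
    rows = []
    for cust, (periods, churn_month) in history.items():
        for t, feats in enumerate(periods):
            if churn_month is not None and t >= churn_month:
                break  # no observations at or after the churn event
            for h in horizons:
                label = int(churn_month is not None and 0 < churn_month - t <= h)
                rows.append({"customer": cust, "t": t, "horizon": h,
                             "label": label, **feats})
    return rows
```

A customer who churns in period 2 contributes positive examples for horizon 2 already at period 0, which is exactly the lead-time information a single-snapshot dataset discards.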
  • Publication
    Pharma rebates, pharmacy benefit managers and employer outcomes
    (Springer, 2010) Mantrala, Murali; Department of Business Administration; Ali, Özden Gür; Faculty Member; Department of Business Administration; College of Administrative Sciences and Economics; 57780
    Corporate employers contract with pharmacy benefit managers (PBMs) with the goals of lowering their employee prescription drug coverage costs while maintaining health care quality. However, little is known about how employer-PBM contract elements and brand drugmakers' rebates combine to influence a profit-maximizing PBM's actions, and the impact of those actions on the employer's outcomes. To shed more light on these issues, the authors build and analyze a mathematical simulation model of a competitive pharmaceutical market comprising one generic and two branded drugs, and involving a PBM contracted by a corporate employer to help it lower prescription drug costs while achieving a minimum desired quality of health care for its employees. The brand drugmakers' rebate offers, the PBM's assignment of drugs to formulary tiers, and the resulting employer outcomes under varying contracts and pharma brand marketing mix environmental scenarios are analyzed to provide insights. The findings include that the pharma brands offer rebates for the PBM's ability to move prescription share away from the unpreferred brand, but reduce these offers when the PBM's contract requires it to proactively influence physicians to prescribe the generic drug alternative. Further, Pareto optimal contracts that provide the highest health benefit for a given employer cost budget are analyzed to provide managerial implications. They are found to involve strong PBM influence on physician prescribing to discourage unpreferred brands, as well as high patient copayment requirements for unpreferred brands to align the patient prescription fill probability with the formulary, while other copayment requirements provide an instrument to determine the level of desired health benefit-cost tradeoff.
  • Publication
    Evaluating average and heterogeneous treatment effects in light of domain knowledge: impact of behaviors on disease prevalence
    (IEEE, 2019) N/A; Department of Business Administration; Ghanem, Angi Nazih; Ali, Özden Gür; N/A; Faculty Member; Department of Business Administration; N/A; College of Administrative Sciences and Economics; N/A; 57780
    Understanding causal treatment effect and its heterogeneity can improve targeting of efforts for prevention and treatment of diseases. A number of methods are emerging to estimate heterogeneous treatment effect from observational data, such as Causal Forest. In this paper, we evaluate the heterogeneous treatment effect estimates in terms of whether they recover the expected direction of the effect based on domain knowledge. We use the individual level health surveys conducted by the Turkish Statistical Institute (TUIK) over the span of eight years with 90K+ observations. We estimate the effect of six behaviors on the probability of two diseases (IHD and Diabetes). We compare two approaches: a) treatment and disease specific Causal Forest models that directly estimate the heterogeneous treatment effect, and b) disease specific Random Forest models of disease probability that are used as simulators to evaluate counterfactual scenarios. We find that, with some exceptions, the signs of Causal Forest heterogeneous treatment effects are aligned with domain knowledge. Causal Forest performed better than the more naive approach of using RF models as simulators, which disregards selection bias in treatment assignment.
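Approach (b), using an outcome model as a counterfactual simulator, can be illustrated as follows. The toy disease-probability model and its coefficients are invented for the sketch, not the paper's fitted Random Forest:

```python
def counterfactual_effect(model, individual, treatment_key):
    """Naive simulator-based treatment effect for one individual:
    difference in predicted outcome probability between the treated and
    untreated versions of the same covariate profile. This construction
    ignores selection bias in who actually takes the treatment."""
    treated = {**individual, treatment_key: 1}
    control = {**individual, treatment_key: 0}
    return model(treated) - model(control)

def toy_model(x):
    # Hand-written stand-in for a fitted disease-probability model:
    # being active lowers risk, more so for older individuals.
    base = 0.10 + 0.05 * x["age_over_60"]
    return base - 0.03 * x["active"] * (1 + x["age_over_60"])
```

Running the simulator on two profiles that differ only in age yields different effect estimates, which is the heterogeneity the paper compares against domain knowledge.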
  • Publication
    A new gravity model with variable distance decay
    (Vilnius Gediminas Technical Univ Press, Technika, 2008) N/A; Department of Business Administration; Department of Business Administration; Sandıkçıoğlu, Müge; Ali, Özden Gür; Sayın, Serpil; Master Student; Faculty Member; Faculty Member; Department of Business Administration; Graduate School of Sciences and Engineering; College of Administrative Sciences and Economics; College of Administrative Sciences and Economics; N/A; 57780; 6755
    Our main goal is to understand customers' store-choice behavior in a grocery retail setting. We see this as a vital first step toward making store location, format, and product promotion decisions in the retail organization. Proposed models in the literature generate consumer utility functions for different stores, which are used in store sales estimation. For example, in one of its basic forms, the Huff model proposes that the utility of a store for an individual is equal to the sales area of the store divided by a power of the individual's distance to the store. Parallel to this stream of research, the Multiplicative Competitor Interaction model estimates log-transformed utility functions by ordinary least squares regression. It is less specific in terms of variable selection compared to the Huff model. This paper proposes a new market share model that is a variant of the Huff model and evaluates the most established market share models, such as the Huff and Multiplicative Competitor Interaction models, as well as a data mining method, in a one-brand, heterogeneous-size retail store setting. We observe that the Huff model performs well in its basic form. By representing the distance decay value as a function of the sales area of the retail store, we are able to improve the performance of the Huff model. We propose using optimization to estimate the model parameters in certain cases and observe that this improves the generalization ability of the model.
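The basic Huff utility, and the paper's idea of letting the distance-decay exponent vary with sales area, can be sketched as follows (the particular decay function used below is invented for illustration):

```python
def huff_shares(sales_areas, distances, decay):
    """Huff model for one consumer: utility of store j is
    S_j / d_j**lambda_j, normalized into store-choice probabilities.
    `decay` is either a constant exponent (the basic Huff form) or a
    function of sales area (the variable-decay variant)."""
    utils = [s / d ** (decay(s) if callable(decay) else decay)
             for s, d in zip(sales_areas, distances)]
    total = sum(utils)
    return [u / total for u in utils]
```

With a constant exponent of 2, a 1000 m2 store at distance 2 and a 500 m2 store at distance 1 get shares 1/3 and 2/3; passing `decay=lambda s: ...` instead makes larger stores decay differently with distance, which is the paper's extension.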
  • Publication
    Multi-period-ahead forecasting with residual extrapolation and information sharing - utilizing a multitude of retail series
    (Elsevier, 2016) Department of Business Administration; N/A; Ali, Özden Gür; Pınar, Efe; Faculty Member; Master Student; Department of Business Administration; College of Administrative Sciences and Economics; Graduate School of Sciences and Engineering; 57780; N/A
    Multi-period sales forecasts are important inputs for operations at retail chains with hundreds of stores, and many different formats, customer segments and categories. In addition to the effects of seasonality, holidays and marketing, correlated random disturbances also affect sales across stores that share common characteristics. We propose a novel method, Two-Stage Information Sharing that takes advantage of this challenging complexity. In this method, segment-specific panel regressions with seasonality and marketing variables pool the data, in order to provide better parameter estimates. The residuals are then extrapolated non-parametrically using features that are constructed from the last twelve months of observations from the focal and related category-store time series. The final forecast combines the extrapolated residuals with the forecasts from the first stage. Working with the extensive dataset of a leading Turkish retailer, we show that this method significantly outperforms both panel regression models (mixed model) with an AR(1) error structure and the autoregressive distributed lags (ADL) model, as well as the univariate exponential smoothing (Winters') method. The further out the prediction, the greater the improvement. (C) 2015 International Institute of Forecasters. Published by Elsevier B.V. All rights reserved.
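A minimal sketch of the two-stage idea, with ordinary least squares standing in for the segment-specific panel regressions and a k-nearest-neighbor average standing in for the non-parametric residual extrapolation (both simplifications of the paper's method):

```python
import numpy as np

def two_stage_forecast(X, y, x_future, resid_features, resid_feat_future, k=3):
    """Stage 1: pooled linear regression on seasonality/marketing regressors.
    Stage 2: extrapolate the stage-1 residuals non-parametrically, here by
    averaging the residuals of the k training points whose recent-history
    features are closest to the forecast point. The final forecast is the
    stage-1 prediction plus the extrapolated residual."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dist = np.linalg.norm(resid_features - resid_feat_future, axis=1)
    nearest = np.argsort(dist)[:k]
    return x_future @ beta + resid[nearest].mean()
```

When the correlated disturbances carry signal, stage 2 corrects the pooled regression toward the focal series' recent behavior; with zero residuals it reduces to the stage-1 forecast.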
  • Publication
    Cross-selling investment products with a win-win perspective in portfolio optimization
    (Informs, 2017) Özçelik, M. Hamdi; Department of Business Administration; Department of Business Administration; Department of Business Administration; N/A; Ali, Özden Gür; Akçay, Yalçın; Sayman, Serdar; Yılmaz, Emrah; Faculty Member; Faculty Member; Faculty Member; PhD Student; Department of Business Administration; College of Administrative Sciences and Economics; College of Administrative Sciences and Economics; College of Administrative Sciences and Economics; Graduate School of Business; 57780; 51400; 112222; N/A
    We propose a novel approach to cross-selling investment products that considers both the customers' and the bank's interests. Our goal is to improve the risk-return profile of the customer's portfolio and the bank's profitability concurrently, essentially creating a win-win situation, while deepening the relationship with an acceptable product. Our cross-selling approach takes the customer's status quo bias into account by starting from the existing customer portfolio, rather than forming an efficient portfolio from scratch. We estimate a customer's probability of accepting a product offer with a predictive model using readily available data. Then, we model the investment product cross-selling problem as a nonlinear mixed-integer program that maximizes a customer's expected return from the proposed portfolio, while ensuring that the bank's profitability improves by a certain factor. We implemented our methodology at the private banking division of Yapi Kredi, the fourth-largest private bank in Turkey. Empirical results from this application illustrate that (1) a traditional mean-variance portfolio optimization approach does not increase portfolio returns and reduces overall bank profits, (2) a standard cross-selling approach increases bank profits at the expense of the customers' portfolio returns, and (3) our win-win approach increases the expected portfolio returns of customers without increasing their variances, while simultaneously improving bank profits substantially.
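The objective and constraint of the win-win formulation can be illustrated with a single-offer brute force. The real model is a nonlinear mixed-integer program over whole portfolios; the product names, numbers, and single-offer restriction below are invented for the sketch:

```python
def win_win_offer(candidates, base_return, base_profit, profit_factor=1.05):
    """Among candidate products, each described by (expected portfolio-return
    contribution, bank-profit contribution, acceptance probability), pick
    the offer that maximizes the customer's expected return subject to the
    bank's expected profit improving by at least `profit_factor`."""
    best = None
    for name, (d_ret, d_profit, p_accept) in candidates.items():
        exp_ret = base_return + p_accept * d_ret        # customer objective
        exp_profit = base_profit + p_accept * d_profit  # bank constraint
        if exp_profit >= profit_factor * base_profit:
            if best is None or exp_ret > best[1]:
                best = (name, exp_ret)
    return best
```

The acceptance probability plays the role of the paper's predictive offer-acceptance model: a high-return product the customer would rarely accept can lose to a modest one they are likely to take.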
  • Publication
    SKU demand forecasting in the presence of promotions
    (Elsevier, 2009) van Woensel, Tom; Fransoo, Jan; Department of Business Administration; Department of Business Administration; Ali, Özden Gür; Sayın, Serpil; Faculty Member; Faculty Member; Department of Business Administration; College of Administrative Sciences and Economics; College of Administrative Sciences and Economics; 57780; 6755
    Promotions and shorter life cycles make grocery sales forecasting more difficult, requiring more complicated models. We identify methods of increasing complexity and data preparation cost yielding increasing improvements in forecasting accuracy, by varying the forecasting technique, the input features and model scope on an extensive SKU-store level sales and promotion time series from a European grocery retailer. At the high end of data and technique complexity, we propose using regression trees with explicit features constructed from sales and promotion time series of the focal and related SKU-store combinations. We observe that data pooling almost always improves model performance. The results indicate that simple time series techniques perform very well for periods without promotions. However, for periods with promotions, regression trees with explicit features improve accuracy substantially. More sophisticated input is only beneficial when advanced techniques are used. We believe that our approach and findings shed light on certain questions that arise while building a grocery sales forecasting system. (C) 2009 Elsevier Ltd. All rights reserved.
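The kind of explicit features fed to the regression trees can be sketched from a focal SKU-store series; the feature names and window length below are illustrative, not the paper's exact feature set:

```python
def promo_features(sales, promo, t, window=4):
    """Features for week t from the focal SKU-store series: a non-promotion
    baseline, the current promotion flag, and the uplift observed in recent
    promotion weeks, all computed over a trailing window."""
    past_sales = sales[max(0, t - window):t]
    past_promo = promo[max(0, t - window):t]
    base = [s for s, p in zip(past_sales, past_promo) if not p]
    promoted = [s for s, p in zip(past_sales, past_promo) if p]
    baseline = sum(base) / len(base) if base else 0.0
    uplift = (sum(promoted) / len(promoted) - baseline) if promoted else 0.0
    return {"baseline": baseline, "on_promo": int(promo[t]), "past_uplift": uplift}
```

Separating the baseline from the historical promotion uplift is what lets a tree handle promotion weeks well while simple extrapolation still covers the non-promotion weeks.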
  • Publication
    Selecting rows and columns for training support vector regression models with large retail datasets
    (Elsevier, 2013) Department of Business Administration; N/A; Ali, Özden Gür; Yaman, Kübra; Faculty Member; Master Student; Department of Business Administration; College of Administrative Sciences and Economics; Graduate School of Sciences and Engineering; 57780; N/A
    Although support vector regression models are being used successfully in various applications, the size of business datasets with millions of observations and thousands of variables makes training them difficult, if not impossible. This paper introduces the Row and Column Selection Algorithm (ROCSA) to select a small but informative dataset for training support vector regression models with standard SVM tools. ROCSA uses epsilon-SVR models with L-1-norm regularization of the dual and primal variables for the row and column selection steps, respectively. The first step involves parallel processing of data chunks and selects a fraction of the original observations that are either representative of the pattern identified in the chunk, or represent those observations that do not fit the identified pattern. The column selection step dramatically reduces the number of variables and the multicollinearity in the dataset, increasing the interpretability of the resulting models and their ease of maintenance. Evaluated on six retail datasets from two countries and a publicly available research dataset, the reduced ROCSA training data improves the predictive accuracy on average by 39% compared with the original dataset when trained with standard SVM tools. Comparison with the epsilon-SSVR method using the reduced kernel technique shows similar performance improvement. Training a standard SVM tool with the ROCSA selected observations improves the predictive accuracy on average by 21% compared to the practical approach of random sampling. (C) 2012 Elsevier B.V. All rights reserved.
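The row-selection step can be illustrated in simplified form: fit a cheap model to a chunk, keep every observation that falls outside an epsilon tube (it does not fit the identified pattern), plus a small sample of in-tube points as pattern representatives. The actual ROCSA step uses L1-regularized epsilon-SVR; this stand-in uses ordinary least squares:

```python
import numpy as np

def select_rows(X, y, epsilon=0.5, keep_frac=0.1, seed=0):
    """Simplified row selection for one data chunk: indices of observations
    outside the epsilon tube of a linear fit, plus a random `keep_frac`
    sample of in-tube observations as representatives of the pattern."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = np.abs(y - X @ beta)
    outside = np.where(resid > epsilon)[0]
    inside = np.where(resid <= epsilon)[0]
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(keep_frac * len(inside)))
    reps = (rng.choice(inside, size=min(n_keep, len(inside)), replace=False)
            if len(inside) else np.array([], dtype=int))
    return np.sort(np.concatenate([outside, reps]))
```

On a chunk that is mostly linear with a few anomalous observations, the selected subset retains all the anomalies while shrinking the bulk of the data by roughly `keep_frac`, which is the informativeness/size trade-off the algorithm exploits.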