Quantitative Finance
- [1] arXiv:2405.09747 [pdf, ps, html, other]
Title: NIFTY Financial News Headlines Dataset
Subjects: Computational Finance (q-fin.CP); Machine Learning (cs.LG)
We introduce and make publicly available the NIFTY Financial News Headlines dataset, designed to facilitate and advance research in financial market forecasting using large language models (LLMs). The dataset comes in two versions tailored to different modeling approaches: (i) NIFTY-LM, which targets supervised fine-tuning (SFT) of LLMs with an auto-regressive, causal language-modeling objective, and (ii) NIFTY-RL, formatted for alignment methods such as reinforcement learning from human feedback (RLHF), i.e. aligning LLMs via rejection sampling and reward modeling. Each version provides curated, high-quality data incorporating comprehensive metadata, market indices, and deduplicated financial news headlines, systematically filtered and ranked to suit modern LLM frameworks. We also include experiments demonstrating applications of the dataset to tasks such as stock price movement prediction and the role of LLM embeddings in information acquisition/richness. The NIFTY dataset, along with utilities (such as systematically truncating a prompt's context length), is available on Hugging Face at this https URL.
- [2] arXiv:2405.09764 [pdf, ps, html, other]
Title: Clearing time randomization and transaction fees for auction market design
Comments: 30 pages, 11 figures
Subjects: Trading and Market Microstructure (q-fin.TR); Optimization and Control (math.OC)
Flaws of the continuous limit order book mechanism raise the question of whether a periodic auction session would bring better efficiency than a continuous trading session. This paper goes further in designing a periodic auction for the setting in which both a continuous market and a periodic auction market are available to traders. In a periodic auction, we show that a strategic trader can exploit the information accumulated over the auction duration by arriving at the last moment before the auction closes, increasing the price impact on the market. Such price impact moves the clearing price away from the efficient price and may disturb the efficiency of the periodic auction market. We therefore propose and quantify the effect of two remedies to mitigate these flaws: randomizing the auction's closing time and optimally designing a transaction fee policy. Our results show that these policies encourage a strategic trader to send orders earlier, enhancing the efficiency of the auction market, as illustrated with data from Alphabet and Apple stocks.
- [3] arXiv:2405.09766 [pdf, ps, html, other]
Title: A note on continuity and consistency of measures of risk and variability
Subjects: Risk Management (q-fin.RM)
In this short note, we show that every convex, order bounded above functional on a Banach lattice is automatically norm continuous. This improves a result in \cite{RS06} and applies to many deviation and variability measures. We also show that an order-continuous, law-invariant functional on an Orlicz space is strongly consistent everywhere, extending a result in \cite{KSZ14}.
- [4] arXiv:2405.09929 [pdf, ps, html, other]
Title: The $\kappa$-generalised Distribution for Stock Returns
Subjects: Statistical Finance (q-fin.ST); Applications (stat.AP)
Empirical evidence shows that stock returns are often heavy-tailed rather than normally distributed. The $\kappa$-generalised distribution, which originated in the context of statistical physics with Kaniadakis, is characterised by the $\kappa$-exponential function, which is asymptotically exponential for small values and asymptotically power-law for large values. This property makes it a good candidate distribution for many types of quantities. In this paper we focus on fitting historical daily stock returns for the FTSE 100 and the top 100 Nasdaq stocks. Using a Monte Carlo goodness-of-fit test, we find evidence that the $\kappa$-generalised distribution is a good fit for a significant proportion of the 200 stock return series analysed.
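The tail behaviour described in the abstract can be made concrete. A minimal sketch, assuming the standard Kaniadakis parameterisation of the $\kappa$-exponential (the paper's exact form and fitting procedure may differ):

```python
import numpy as np

def kappa_exp(x, kappa=0.3):
    """Kaniadakis kappa-exponential: exp_k(x) = (sqrt(1 + k^2 x^2) + k x)^(1/k).
    It reduces to exp(x) as kappa -> 0 and decays as a power law for large |x|,
    which is what makes it attractive for heavy-tailed return distributions."""
    x = np.asarray(x, dtype=float)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

# Small arguments: essentially the ordinary exponential.
small = float(kappa_exp(0.01, kappa=0.3))
# Large negative arguments: a power-law tail, far heavier than exp(-x).
tail_ratio = float(kappa_exp(-20.0, kappa=0.3) / np.exp(-20.0))
```

The `tail_ratio` being orders of magnitude above 1 is exactly the heavy-tail property the abstract exploits when fitting daily returns.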
- [5] arXiv:2405.09984 [pdf, ps, html, other]
Title: Mode of sustainable economic development
Subjects: General Economics (econ.GN)
To implement the previously formulated principles of sustainable economic development, we give a complete description of all non-negative solutions of the linear system of equations and inequalities satisfied by the vector of real consumption. We establish that the vector of real consumption with the minimum level of excess supply is determined by the solution of a certain quadratic programming problem. We then establish necessary and sufficient conditions under which an economic system described by the "input-output" production model operates in the mode of sustainable development. Building on the complete description of all solutions of the system of linear equations and inequalities, we give a complete description of the equilibrium states for which markets are partially cleared in the "input-output" production model. We prove the existence of a family of taxation vectors in the "input-output" model under which the economic system is able to function in the mode of sustainable development, and we identify restrictions on the taxation vector under which this remains possible. Finally, axioms for an aggregated description of the economy are proposed.
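The "input-output" production model referred to above can be illustrated with a toy Leontief system; the two-good economy and all numbers below are hypothetical, not the paper's:

```python
import numpy as np

# Toy "input-output" (Leontief) economy: A[i, j] is the amount of good i
# used up to produce one unit of good j, and x is gross output.
# Final consumption left for households is c = (I - A) x; a non-negative c
# is the basic viability requirement on the consumption vector.
A = np.array([[0.2, 0.3],
              [0.4, 0.1]])      # hypothetical technical coefficients
x = np.array([100.0, 80.0])     # hypothetical gross outputs
c = (np.eye(2) - A) @ x         # real consumption vector
viable = bool(np.all(c >= 0.0))
```

The paper's contribution is the full characterisation of all such non-negative solutions and of the taxation vectors compatible with them; this fragment only shows the feasibility check itself.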
- [6] arXiv:2405.10231 [pdf, ps, html, other]
Title: Influencer Cartels
Subjects: General Economics (econ.GN); Computers and Society (cs.CY); Machine Learning (cs.LG)
Social media influencers account for a growing share of marketing worldwide. We demonstrate the existence of a novel form of market failure in this advertising market: influencer cartels, where groups of influencers collude to increase their advertising revenue by inflating their engagement. Our theoretical model shows that influencer cartels can improve consumer welfare if they expand social media engagement to the target audience, or reduce welfare if they divert engagement to less relevant audiences. We validate the model empirically using novel data on influencer cartels combined with machine learning tools, and derive policy implications for how to maximize consumer welfare.
New submissions for Friday, 17 May 2024 (showing 6 of 6 entries)
- [7] arXiv:2208.01270 (replaced) [pdf, ps, html, other]
Title: Time Instability of the Fama-French Multifactor Models: An International Evidence
Comments: 48 pages, 24 figures, 10 tables
Subjects: Statistical Finance (q-fin.ST); General Economics (econ.GN); Pricing of Securities (q-fin.PR)
This paper investigates the time-varying structure of Fama and French's (1993; 2015) multi-factor models using Fama and MacBeth's (1973) two-step estimation based on a rolling-window method. In particular, we employ the generalized GRS statistics proposed by Kamstra and Shi (2024) to examine whether the validity of the risk factors (or factor redundancy) in the FF3 and FF5 models remains stable over time, and whether the manner of portfolio sorting affects this stability. In addition, we examine whether similar results are obtained with different datasets by country and region. First, we find that the effectiveness of the factors in the FF3 and FF5 models is not stable over time in any of the countries examined. Second, the effectiveness of the factors is also affected by the manner of portfolio sorting. Third, the validity of the FF3, FF5, and their nested models does not remain stable over time except in Japan. This suggests that the efficient market hypothesis is supported in the Japanese stock market. Finally, factor redundancy varies over time and is affected by the manner of portfolio sorting, mainly in the U.S. and Europe.
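The Fama-MacBeth two-step estimation the paper builds on can be sketched on synthetic data. This is a single estimation window rather than the paper's rolling windows, and all data and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, K = 120, 25, 3                        # months, assets, factors
factors = rng.normal(size=(T, K))            # synthetic factor returns
true_betas = rng.normal(size=(N, K))         # synthetic loadings
returns = factors @ true_betas.T + 0.1 * rng.normal(size=(T, N))

# Step 1: time-series regression per asset to estimate factor loadings.
X = np.column_stack([np.ones(T), factors])
betas = np.linalg.lstsq(X, returns, rcond=None)[0][1:].T      # shape (N, K)

# Step 2: cross-sectional regression of returns on the estimated loadings
# in each period; the time average of the slopes estimates the risk premia.
Z = np.column_stack([np.ones(N), betas])
lambdas = np.array([np.linalg.lstsq(Z, returns[t], rcond=None)[0][1:]
                    for t in range(T)])
risk_premia = lambdas.mean(axis=0)           # shape (K,)
```

The rolling-window variant in the paper repeats both steps on a sliding sample, and the GRS-type statistics are then computed per window; neither is shown here.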
- [8] arXiv:2302.02269 (replaced) [pdf, ps, html, other]
Title: A Modified CTGAN-Plus-Features Based Method for Optimal Asset Allocation
Comments: In figures 3 and 4, the labels "Synthetic" and "Original" were swapped. Now these figures have the correct labels. Results unchanged
Subjects: Portfolio Management (q-fin.PM); Computational Engineering, Finance, and Science (cs.CE)
We propose a new approach to portfolio optimization that utilizes a unique combination of synthetic data generation and a CVaR constraint. We formulate the portfolio optimization problem as an asset allocation problem in which each asset class is accessed through a passive (index) fund. The asset-class weights are determined by solving an optimization problem that includes a CVaR constraint. The optimization relies on a Modified CTGAN algorithm which incorporates features (contextual information) and is used to generate synthetic return scenarios, which, in turn, are fed into the optimization engine. For contextual information we rely on several points along the U.S. Treasury yield curve. The merits of this approach are demonstrated with an example based on ten asset classes (covering stocks, bonds, and commodities) over a fourteen-and-a-half-year period (January 2008 - June 2022). We show that the synthetic generation process captures the key characteristics of the original data well, and that the optimization scheme yields portfolios with satisfactory out-of-sample performance. Finally, we show that this approach outperforms the conventional equal-weights (1/N) asset allocation strategy and other optimization formulations based on historical data only.
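The CVaR constraint at the heart of the optimization can be sketched with the standard Rockafellar-Uryasev sample estimator applied to simulated scenarios. The Gaussian scenario generator and the fixed weights below are placeholders, not the paper's CTGAN pipeline or its optimizer:

```python
import numpy as np

def cvar(losses, alpha=0.95):
    """Sample CVaR (expected shortfall) at level alpha, using the
    Rockafellar-Uryasev representation CVaR = t + E[(L - t)+] / (1 - alpha),
    with t taken as the sample VaR (the alpha-quantile of losses)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)
    return float(var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - alpha))

# Synthetic return scenarios standing in for generator output,
# for a hypothetical two-asset allocation.
rng = np.random.default_rng(42)
scenarios = rng.normal(0.0005, 0.01, size=(10_000, 2))
weights = np.array([0.6, 0.4])
portfolio_losses = -(scenarios @ weights)
risk = cvar(portfolio_losses, alpha=0.95)
```

In the paper's scheme this estimator (or an equivalent linear-programming reformulation) would sit inside the constraint set while the weights are optimized; here the weights are simply fixed for illustration.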
- [9] arXiv:2303.07158 (replaced) [pdf, ps, html, other]
Title: Uniform Pessimistic Risk and Optimal Portfolio
Subjects: Portfolio Management (q-fin.PM); Machine Learning (cs.LG); Computation (stat.CO); Machine Learning (stat.ML)
The optimal allocation of assets has been widely discussed alongside the theoretical analysis of risk measures, and pessimism is one of the most attractive approaches beyond the conventional optimal portfolio model. The $\alpha$-risk plays a crucial role in deriving a broad class of pessimistic optimal portfolios. However, estimating an optimal portfolio assessed by a pessimistic risk remains challenging due to the absence of a computationally tractable model. In this study, we propose an integral of the $\alpha$-risk, called the \textit{uniform pessimistic risk}, together with a computational algorithm to obtain an optimal portfolio based on this risk. Further, we investigate the theoretical properties of the proposed risk from three different perspectives: multiple quantile regression, the proper scoring rule, and distributionally robust optimization. Real-data analysis of three stock datasets (S\&P500, CSI500, KOSPI200) demonstrates the usefulness of the proposed risk and portfolio model.
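A rough numerical reading of "an integral of $\alpha$-risk": below, the $\alpha$-risk is taken to be the CVaR-type pessimistic risk familiar from the pessimistic-portfolio literature, and the integral is approximated by averaging over a grid of levels. The paper's precise definition, weighting, and algorithm may differ:

```python
import numpy as np

def alpha_risk(losses, alpha):
    """Pessimistic alpha-risk: the average of the worst alpha-fraction of
    losses (a CVaR-type functional). Smaller alpha = more pessimistic."""
    losses = np.sort(np.asarray(losses, dtype=float))
    k = max(1, int(np.ceil(alpha * losses.size)))
    return float(losses[-k:].mean())

def uniform_pessimistic_risk(losses, grid=None):
    """Sketch of an 'integral of alpha-risk': average the alpha-risk over a
    grid of alpha levels (illustrative grid, uniform weights)."""
    if grid is None:
        grid = np.linspace(0.05, 0.5, 10)
    return float(np.mean([alpha_risk(losses, a) for a in grid]))

rng = np.random.default_rng(1)
losses = rng.standard_t(df=4, size=20_000)   # heavy-tailed toy losses
upr = uniform_pessimistic_risk(losses)
```

Since the $\alpha$-risk increases as $\alpha$ shrinks, averaging over levels yields a risk strictly more conservative than the least pessimistic level in the grid, which is the intuition behind integrating over $\alpha$.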
- [10] arXiv:2305.10849 (replaced) [pdf, ps, html, other]
Title: Extreme ATM skew in a local volatility model with discontinuity: joint density approach
Comments: To appear in Finance and Stochastics
Subjects: Mathematical Finance (q-fin.MF); Probability (math.PR)
This paper concerns a local volatility model in which the volatility takes two possible values, depending on whether the underlying price is above or below a given threshold. The model is known, and a number of results have been obtained for it. In particular, option pricing formulas and a power-law behaviour of the implied volatility skew have been established for the case when the threshold is taken at the money. In this paper we derive an alternative representation of the option pricing formulas. In addition, we obtain an approximation of option prices by the corresponding Black-Scholes prices, which streamlines the derivation of the aforementioned skew behaviour. Our approach is based on the model's natural relationship with skew Brownian motion and consists in the systematic use of the joint distribution of this process and some of its functionals.
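The two-valued local volatility model described above is easy to simulate directly, which complements the paper's analytical results. A minimal Euler sketch with illustrative parameters (higher volatility below the threshold; not the paper's approach or calibration):

```python
import numpy as np

# Euler simulation of a local volatility model with two volatility regimes:
# sigma_hi when the price is below the threshold, sigma_lo above it.
rng = np.random.default_rng(7)
s0 = threshold = 100.0                 # threshold taken at the money
sigma_hi, sigma_lo = 0.4, 0.2          # illustrative regime volatilities
n_paths, n_steps, T = 20_000, 100, 1.0
dt = T / n_steps

s = np.full(n_paths, s0)
for _ in range(n_steps):
    sigma = np.where(s < threshold, sigma_hi, sigma_lo)
    z = rng.normal(size=n_paths)
    # log-Euler step keeps prices strictly positive
    s = s * np.exp(-0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * z)

# Monte Carlo price of an at-the-money call (zero rates, zero dividends).
atm_call = float(np.mean(np.maximum(s - threshold, 0.0)))
```

Comparing this Monte Carlo price across strikes against Black-Scholes prices is one empirical way to observe the steep ATM skew the paper analyses; the paper itself proceeds via skew Brownian motion rather than simulation.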
- [11] arXiv:2401.09054 (replaced) [pdf, ps, html, other]
Title: On conditioning and consistency for nonlinear functionals
Subjects: Mathematical Finance (q-fin.MF); Theoretical Economics (econ.TH)
We consider a family of conditional nonlinear expectations defined on the space of bounded random variables and indexed by the class of all sub-sigma-algebras of a given underlying sigma-algebra. We show that if this family satisfies a natural consistency property, then it collapses to a conditional certainty equivalent defined in terms of a state-dependent utility function. This result is obtained by embedding our problem in a decision-theoretic framework and providing a new characterization of the Sure-Thing Principle. In particular, we prove that this principle characterizes those preference relations which admit consistent backward conditional projections. We build our analysis on state-dependent preferences for a general state space, as in Wakker and Zank (1999), and show that their numerical representation admits a continuous version of the state-dependent utility. In this way, we also give a positive answer to a conjecture posed in the aforementioned paper.
- [12] arXiv:2404.11745 (replaced) [pdf, ps, html, other]
Title: Piercing the Veil of TVL: DeFi Reappraised
Subjects: General Finance (q-fin.GN)
Total value locked (TVL) is widely used to measure the size and popularity of decentralized finance (DeFi). However, the prevalent TVL calculation framework suffers from a "double counting" issue that inflates the metric. We find existing methodologies addressing double counting to be either inconsistent or flawed. To solve this, we formalize the TVL framework and propose a new metric, total value redeemable (TVR), to accurately assess the true value within DeFi. The TVL formalization reveals how DeFi's complex network spreads financial contagion via derivative tokens, amplifying liquidations and stablecoin depegging in market downturns. We construct the DeFi multiplier, mirroring the money multiplier in traditional finance (TradFi), to quantify the double counting. Our measurement finds substantial double counting in DeFi: at the peak of DeFi activity on December 2, 2021, the difference between TVL and TVR was \$139.87 billion, with a TVL-to-TVR ratio of about 2. We further show that TVR is a more stable metric than TVL, especially during market downturns: a 25% decline in the price of Ether (ETH) leads to a non-linear decrease in TVL that is \$1 billion greater than the corresponding decrease in TVR. We also observe a substitution effect between TradFi and DeFi.
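The double-counting mechanism and the DeFi multiplier can be shown with a two-protocol toy example; the dollar amounts and the two-protocol chain are hypothetical, not the paper's data:

```python
# Toy illustration of TVL double counting: a user deposits $100 of ETH into
# a lending protocol, receives a derivative token representing the claim,
# and stakes that token in a second protocol. Naive TVL counts the same
# collateral in both protocols; a TVR-style measure counts only the
# redeemable underlying value once.
deposit = 100.0
tvl_protocol_a = deposit            # ETH locked in the lending protocol
tvl_protocol_b = deposit            # derivative token restaked elsewhere
tvl = tvl_protocol_a + tvl_protocol_b
tvr = deposit                       # underlying value actually redeemable
defi_multiplier = tvl / tvr         # analogue of the money multiplier
```

Longer restaking chains push the multiplier higher, which is the toy analogue of the TVL-to-TVR ratio of about 2 reported in the abstract.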
- [13] arXiv:2405.03496 (replaced) [pdf, ps, html, other]
Title: Price-Aware Automated Market Makers: Models Beyond Brownian Prices and Static Liquidity
Subjects: Trading and Market Microstructure (q-fin.TR)
In this paper, we introduce a suite of models for price-aware automated market making platforms that seek to optimize their quotes. These models incorporate advanced price dynamics, including stochastic volatility, jumps, and microstructural price models based on Hawkes processes. Additionally, we address the variability in demand from liquidity takers through models that employ either Hawkes or Markov-modulated Poisson processes. Each model is analyzed with particular emphasis on the complexity of the numerical methods required to compute optimal quotes.
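The Hawkes-driven demand and price models mentioned above rest on simulating a self-exciting intensity. A standard Ogata-thinning sketch for a univariate Hawkes process with exponential kernel (parameters illustrative; the paper's models and numerics are more involved):

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, horizon, seed=0):
    """Simulate a univariate Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    by Ogata's thinning algorithm. Stationarity requires alpha < beta."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        # Current intensity is an upper bound until the next event,
        # since the exponential kernel only decays between events.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        if t > horizon:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:   # accept with prob lam_t/lam_bar
            events.append(t)
    return events

events = simulate_hawkes(mu=1.0, alpha=0.5, beta=2.0, horizon=50.0)
```

With these parameters the long-run event rate is mu / (1 - alpha / beta), i.e. clustering raises the rate above the baseline mu; in the market-making setting the events would be liquidity-taker arrivals.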