Nathan Burton
Senior Quantitative Researcher
Though perhaps not discussed as often as price prediction, understanding the volatility of asset prices is fundamental to sophisticated DeFi and will only become more crucial as DeFi continues to mature. Models that provide predictive forecasts and quantitative measures of asset volatility can be useful for mean-variance portfolio optimization, risk analysis, options and derivatives pricing, and more. In the realm of DeFi, volatility forecasts can be similarly applied to unique use-cases including AMM fee mechanisms, lending pool parameter optimization, optimized liquidity provisioning, and yield farming strategies, among others.
OpenGradient has investigated volatility forecasting of cryptoassets using ML models with promising results. For example, one hour ETH/USDT volatility forecasts from our in-house model produce correlations between forecast and true volatility of ρ>0.8 in out-of-sample test sets. In this post we discuss three specific use cases for volatility forecasting along with our model and results.
Volatility refers to the amount of variability in a quantity, especially variability associated with unpredictable changes. In this post we generally mean volatility in the price or exchange rate of an asset. Volatility can be numerically measured in a number of ways, including the standard deviation or variance of asset returns, the standard deviation of price, high-minus-low, bipower variation, and variations of these. Standard deviation of returns is the most commonly used measure of price volatility. It does require a choice of resolution, and often of timeframe: for example, one might take the set of one-second returns and calculate their standard deviation over the course of one minute, or one might estimate the standard deviation of one-day returns. Note that there is a relationship between long-term volatility and average short-term volatility over the same period, which can be helpful if used properly.
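To make the measurement concrete, the sketch below computes per-minute realized volatility as the standard deviation of 1-minute log returns over an hour, then scales it to an hourly figure using the square-root-of-time relationship between short- and long-term volatility mentioned above. The prices are synthetic and the scale is illustrative.

```python
import numpy as np

def realized_vol(prices: np.ndarray) -> float:
    """Standard deviation of one-step log returns.

    For 1-minute prices over an hour (61 points -> 60 returns),
    this is the per-minute realized volatility described above.
    """
    log_returns = np.diff(np.log(prices))
    return float(np.std(log_returns, ddof=1))

# Synthetic 1-minute price path over one hour (illustrative scale)
rng = np.random.default_rng(0)
prices = 3000.0 * np.exp(np.cumsum(rng.normal(0.0, 0.001, 61)))

sigma_1m = realized_vol(prices)
# Under i.i.d. returns, per-hour vol scales as sqrt(60) x per-minute vol
sigma_1h = sigma_1m * np.sqrt(60)
```

The square-root scaling assumes returns are roughly independent minute to minute; volatility clustering makes this an approximation, which is part of why forecasting models beat naive scaling.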
In nearly all actionable use cases of volatility, the key is understanding something about volatility in the future. Attempting to balance a portfolio of assets to meet volatility requirements is based on expectations of future volatility, and prior volatility is only useful insofar as it helps in forecasting future volatility. Volatility forecasting then becomes the tool of greatest interest. Though current volatility is often used as the expectation of future volatility, this method of forecasting is relatively unsophisticated, and other forecasting methods, including machine-learned models, are demonstrably more accurate. In the example ML model described in this post we forecast the standard deviation of 1-minute returns over the course of an hour, although we continue to work on various models over varying timeframes depending on application.
By understanding, quantifying, and managing volatility, investors, developers, and market participants can ultimately foster a more stable and efficient digital financial ecosystem that supports sustainable lending practices, responsible leverage, and the long-term viability of DeFi infrastructure.
In the realm of portfolio management, volatility is not merely a source of risk but also a metric for dynamic asset allocation and strategic rebalancing. It is common to adjust a portfolio so that assets expected to exhibit high price fluctuation receive smaller weights, keeping overall risk at the appropriate level, whether that level is high or low. As more derivative products become available in DeFi, one may make greater or lesser use of them for hedging during periods of anticipated turbulence. This approach may prove particularly beneficial for long-term portfolios, as it allows risk managers to both preserve capital and seek to enhance returns by capitalizing on cyclical volatility patterns.
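A minimal sketch of this kind of volatility-aware allocation is inverse-volatility weighting: assets forecast to be more volatile receive proportionally smaller weights. The forecast values below are hypothetical, and a real rebalancer would also account for correlations and return expectations.

```python
import numpy as np

def inverse_vol_weights(vol_forecasts) -> np.ndarray:
    """Weight each asset in inverse proportion to its forecast
    volatility, so the noisiest assets get the smallest allocation."""
    inv = 1.0 / np.asarray(vol_forecasts, dtype=float)
    return inv / inv.sum()

# Hypothetical hourly vol forecasts for three assets
forecasts = np.array([0.02, 0.05, 0.10])
weights = inverse_vol_weights(forecasts)  # sums to 1, lowest-vol asset largest
```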
Returns forecasts could be used in tandem with volatility forecasting to help inform and evaluate portfolio allocation decisions. OpenGradient has created example spot forecast models, and continues to explore the space, including research in ensemble models that marry return forecast with risk/volatility forecasting, and in portfolio managing agents that can evaluate risks using ML volatility models.
Many protocol types are inherently exposed to risk associated with price volatility. Vaults and yield farming protocols are meant to be relatively low risk, yet high volatility can lead to losses in various ways, including total loss of value or unwanted exposure when engaging in transactions. Whether the protocol uses static algorithms or agents, risk can be managed by decreasing exposure to volatile assets or reacting defensively, for example by freezing transactions when high volatility is expected. OpenGradient is currently working with a partner (details to be released soon) on a vault AI agent that will use volatility-based risk management models to reduce risk for the holder.
For market-makers, volatility directly influences both the pricing of assets and the liquidity they commit to markets. In high-volatility environments, a market maker might either broaden bid-ask spreads on a centralized exchange or increase fees in an AMM protocol or direct liquidity provision to compensate for the increased probability of adverse selection, impermanent loss, and LVR.
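One simple way an AMM could act on a forecast is to map it directly to a pool fee: a base fee plus a term that grows with forecast volatility, capped at some maximum. The sketch below is illustrative; the parameter values and the linear form are assumptions, not any production fee schedule.

```python
def dynamic_fee(vol_forecast: float,
                base_fee: float = 0.0005,     # 5 bps floor (illustrative)
                sensitivity: float = 2.0,     # fee growth per unit of vol (illustrative)
                fee_cap: float = 0.01) -> float:
    """Map a volatility forecast to a pool fee.

    Higher forecast vol -> higher fee, compensating LPs for
    adverse selection, impermanent loss, and LVR; capped so the
    pool stays usable even in extreme forecasts.
    """
    return min(base_fee + sensitivity * vol_forecast, fee_cap)
```

A usage example: with a calm forecast of 0.001, the fee is 25 bps; in turbulent conditions the cap binds and the fee stays at 100 bps.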
OpenGradient's research on AMMs and dynamic fees based on a volatility-related model can be found in this post. There we use a model designed specifically for AMM fee setting, but it was built around the concept of volatility forecasting, providing sound evidence for the effective use of ML volatility forecasting for LPs.
Additionally, OpenGradient has begun working with partners building AI agents that simplify earning yield on inactive tokens through LPing; the agent can use an ML volatility model to decide whether the risk of impermanent loss in an AMM pool is too high.
Volatility must play a central role in designing lending parameters and managing collateral risk within decentralized lending protocols. Many lending protocols rely on static collateral requirements or interest rates; this is highly inefficient and could be improved with forecasting models. A decentralized lending platform could respond to volatility forecasts by optimizing collateral ratios, increasing fees, or implementing more frequent price updates to mitigate liquidation cascades. By incorporating volatility forecasts into their risk frameworks, lending protocols can reduce the probability of collateral shortfalls during high volatility while remaining competitive and attracting more borrowers in times of low volatility.
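As an illustrative sketch of the collateral-ratio idea, a protocol could scale its required collateralization with the forecast, so borrowers post a larger buffer when larger price swings are expected. The linear form and all parameter values here are assumptions for illustration, not any protocol's actual settings.

```python
def required_collateral_ratio(vol_forecast: float,
                              min_ratio: float = 1.2,  # floor in calm markets (illustrative)
                              k: float = 3.0) -> float:
    """Scale the required collateral ratio with forecast volatility.

    Higher forecast vol -> larger liquidation buffer; in low-vol
    periods the requirement relaxes toward min_ratio, keeping the
    protocol competitive for borrowers.
    """
    return min_ratio * (1.0 + k * vol_forecast)
```

For example, with k = 3 a forecast of 2% hourly volatility raises the requirement from 120% to roughly 127%.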
OpenGradient is currently developing models based on volatility forecasting that adjust collateral requirements for lending protocols to achieve the above. Additionally, OpenGradient is working with partners on volatility-based models that assist a borrower, via an AI agent, in deciding whether to put up collateral to borrow or whether the risk of liquidation is too high and borrowing should be avoided.
Volatility has long played a crucial role in derivatives pricing by providing estimates of future price fluctuations in the underlying asset. By incorporating time-series forecasting models like GARCH (Generalized Autoregressive Conditional Heteroskedasticity) or other volatility forecasting models, traders can better estimate volatility, leading to more accurate pricing of derivatives such as European options, where volatility enters pricing as the σ parameter in methods like Black-Scholes-Merton or the Heston model.
These forecasts help capture the dynamic nature of market volatility, including volatility clustering and mean reversion effects, which are particularly important during periods of market stress when standard Black-Scholes assumptions may break down. This more nuanced approach to volatility estimation can lead to better risk management and potentially identify mispriced options in the market, especially for longer-dated contracts where the accuracy of volatility forecasts becomes increasingly important.
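To illustrate how a volatility forecast feeds into option pricing, the sketch below prices a European call with the standard Black-Scholes-Merton formula, plugging an annualized forecast volatility in as σ. The market inputs are hypothetical, not real quotes.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """Black-Scholes-Merton price of a European call.

    S: spot, K: strike, T: time to expiry in years,
    r: risk-free rate, sigma: annualized volatility (e.g. a forecast).
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical ETH call priced with an annualized forecast vol of 65%
price = bs_call(S=3000.0, K=3100.0, T=30 / 365, r=0.04, sigma=0.65)
```

Because the call price is monotonically increasing in σ, a more accurate forecast translates directly into a more accurate price, which is where mispricings can be identified.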
Our 1-hour ETH/USDT volatility forecast model uses the 10 prior 30-minute candles (5 hours total) as inputs. These candles are log-transformed and fed into the model, which was MSE-optimized for a target of the standard deviation of 1-minute returns over the next hour; i.e., we are trying to predict

σ = std(r₁, …, r₆₀), where rₜ = ln(Pₜ / Pₜ₋₁)

and Pₜ is the price at minute t of the forecast hour.
The 30-minute candles allow the input to account for recent returns over an extended period via open and close, along with a reasonable volatility approximation via high and low, while still maintaining a limited number of features (10 candles × 4 values = 40 features). This methodology of using lower-granularity candles to predict higher-granularity standard deviation produced the following results against our n=6146 test set:
correlation: 0.85
R²: 0.56
MSE: 8.6×10⁻⁸
as compared to simply using the past hour's volatility to predict the next hour's volatility (same test set):
correlation: 0.60
R²: 0.20
MSE: 24.8×10⁻⁸
As can be seen in the plots below, the simple method of using the past hour volatility to predict the next hour (right plot) is not bad per se, especially for a financial model, however the ML forecast on the left is notably tighter as expected from the statistics above.
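For readers who want to reproduce the input side of this setup, a plausible sketch of the feature construction is below. The shape and log transform follow the description above; the exact normalization and candle ordering used in the production model are not public, so treat this as an assumption.

```python
import numpy as np

def make_features(candles: np.ndarray) -> np.ndarray:
    """Build the model input from the 10 most recent 30-minute candles.

    candles: array of shape (10, 4) holding OHLC values, ordered
    oldest to newest (ordering is an assumption). Returns the
    40-element log-transformed feature vector described above.
    """
    if candles.shape != (10, 4):
        raise ValueError("expected 10 OHLC candles, shape (10, 4)")
    return np.log(candles).ravel()

# Placeholder OHLC data (constant prices, purely for shape checking)
candles = np.full((10, 4), 2000.0)
features = make_features(candles)  # shape (40,)
```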
The capacity to accurately forecast volatility is critical in advancing the sophistication and resiliency of decentralized finance. Preliminary modeling efforts and use-case demonstrations clearly highlight that volatility forecasting is not a peripheral consideration, but should be integrated as a core analytical component across the DeFi landscape. Machine learning's improvements over naive approaches are both statistically and practically significant. As DeFi expands and matures, sophisticated volatility forecasting will be an essential tool for the use-cases outlined above.
As you consider how you can use volatility forecasting to improve protocols, products, and offerings, OpenGradient offers the ability to help you evaluate, design, build, and host models that infer on-chain. If you have any questions, ideas, or just want to discuss, feel free to reach out!
OpenGradient is a leading decentralized AI platform for open-source model hosting, secure inference, agentic reasoning, and application deployment. By developing tooling and a feature-rich platform that makes developing AI workflows both secure and seamless, OpenGradient empowers developers with the ability to build in our ecosystem of intelligent and optimized AI-empowered applications. With native model hosting, permissionless composability, and secure inference execution, OpenGradient also aims to accelerate open-source AI by democratizing model ownership, improving verifiability guarantees, and promoting censorship-resistant model access.
Current live products and initiatives include:
Research - In-house research team focused on developing AI and ML models as open-sourced public goods for Web3 protocols.
Model Hub - A Web3 model repository like HuggingFace, featuring a web UI built on a completely decentralized backend including OpenGradient’s decentralized filestore and blockchain infrastructure.
OpenGradient SDK - The OpenGradient SDK allows programmatic access to our model filestore and decentralized AI infrastructure from Python or the command line. Model developers can use the SDK to publish and manage their model deployments on OpenGradient and integrate them directly into their AI workflows.
NeuroML - Solidity library that exposes functions important for running AI-driven workloads on our EVM network, e.g., data pre-processing, data post-processing, statistical analysis, and different flavors of inference.
Blockchain EVM Network - Blockchain network leveraging heterogeneous AI compute architecture (HACA) and node specialization to make features like model hosting, data access, and AI inference secure and scalable.