What are autoregressive models?

Autoregressive models are a class of machine learning (ML) models that automatically predict the next component in a sequence based on measurements of the previous inputs in the sequence. Autoregression is a statistical technique used in time-series analysis that assumes that the current value of a time series is a function of its past values. Autoregressive models use similar mathematical techniques to determine the probabilistic correlation between elements in a sequence. They then use that knowledge to predict the next element in an unknown sequence. For example, during training, an autoregressive model processes several English-language sentences and identifies that the word “is” often follows the word “there.” It can then generate a new sequence that contains “there is” together.

How are autoregressive models used in generative AI?

Generative artificial intelligence (generative AI) is an advanced data science technology capable of creating new and unique content by learning from massive volumes of training data. The following sections describe how autoregressive modeling enables generative AI applications. 

Natural language processing (NLP)

Autoregressive modeling is an important component of large language models (LLMs). LLMs are powered by the generative pre-trained transformer (GPT), a deep neural network derived from the transformer architecture. The transformer consists of an encoder and a decoder, which enable natural language understanding and natural language generation, respectively. GPT uses only the decoder, for autoregressive language modeling. This allows GPT to understand natural language and respond in ways humans comprehend. A GPT-powered large language model predicts the next word by considering the probability distribution learned from the text corpus it is trained on.
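
As a minimal sketch of how this decoding loop works, the model repeatedly samples the next token from a probability distribution conditioned on everything generated so far. Here, next_token_probabilities is a hypothetical stand-in for a trained decoder, and the toy vocabulary is made up for the example:

```python
import numpy as np

VOCAB = ["there", "is", "a", "cat", "."]  # toy vocabulary for illustration

def next_token_probabilities(context):
    """Hypothetical stand-in for a trained decoder-only model.

    A real GPT computes this distribution from the full context; here we
    return a fixed distribution so the sketch stays runnable.
    """
    probs = np.ones(len(VOCAB))
    if context and context[-1] == "there":
        probs[VOCAB.index("is")] = 10.0  # "is" often follows "there"
    return probs / probs.sum()

def generate(prompt, num_tokens, rng=np.random.default_rng(0)):
    tokens = list(prompt)
    for _ in range(num_tokens):
        probs = next_token_probabilities(tokens)
        # Sample the next token, then feed it back in as context
        tokens.append(VOCAB[rng.choice(len(VOCAB), p=probs)])
    return " ".join(tokens)

print(generate(["there"], num_tokens=4))
```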

Image synthesis

Autoregression allows deep learning models to generate images from limited information. Image-processing neural networks such as PixelRNN and PixelCNN use autoregressive modeling to predict visual data, generating each pixel conditioned on the pixels produced before it. You can use autoregressive techniques to sharpen, upscale, and reconstruct images while maintaining quality. 
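
The core mechanism in PixelCNN, for example, is a masked convolution: each pixel's prediction may depend only on pixels above it and to its left in raster-scan order. A minimal sketch of such a mask, illustrative rather than the full published architecture:

```python
import numpy as np

def causal_mask(kernel_size=3):
    """Mask for a PixelCNN-style convolution kernel.

    The center pixel and every "future" pixel (in raster-scan order) are
    zeroed, so a prediction never sees the pixel it is predicting.
    """
    mask = np.ones((kernel_size, kernel_size))
    center = kernel_size // 2
    mask[center, center:] = 0.0  # center pixel and pixels to its right
    mask[center + 1:, :] = 0.0   # all rows below the center
    return mask

print(causal_mask(3))
# [[1. 1. 1.]
#  [1. 0. 0.]
#  [0. 0. 0.]]
```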

Time-series prediction 

Autoregressive models are helpful in predicting the likelihood of time-series events. For example, deep learning models use autoregressive techniques for forecasting stock prices, weather, and traffic conditions based on historical values. 
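
For instance, here is a sketch of autoregressive forecasting with the AutoReg class from the statsmodels library; the synthetic series and the lag order of 7 are illustrative choices:

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Synthetic daily temperature-like series: a weekly cycle plus noise
rng = np.random.default_rng(42)
t = np.arange(200)
series = 20 + 5 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, len(t))

# Fit an autoregressive model that looks back 7 steps,
# then forecast the next 3 values
model = AutoReg(series, lags=7).fit()
forecast = model.predict(start=len(series), end=len(series) + 2)
print(forecast)
```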

Data augmentation 

ML engineers train AI models with curated datasets to improve performance. In some cases, there is insufficient data to train the model adequately. Engineers use autoregressive models to generate new and realistic deep learning training data. They use the generated data to augment existing limited training datasets.

How does autoregressive modeling work?

An autoregressive model uses a variation of linear regression analysis to predict the next value in a sequence from a given range of variables. In regression analysis, the statistical model is provided with several independent variables, which it uses to predict the value of a dependent variable. 

Linear regression

You can imagine linear regression as drawing a straight line that best fits the data points distributed on a two-dimensional graph. From the straight line, the model generates new data points consistent with the conditional distribution of historical values. 

Consider the simplest form of the line equation between y (dependent variable) and x (independent variable): y=c*x+m, where c and m are constants for all possible values of x and y. For example, suppose the input dataset for (x,y) is (1,5), (2,8), and (3,11). To identify the linear regression equation, you would use the following steps, which the code sketch after the list verifies:

  1. Plot a straight line and measure its fit against the first point, (1,5).
  2. Adjust the line's direction for the new values (2,8) and (3,11) until all the values fit.
  3. Identify the linear regression equation as y=3*x+2.
  4. Extrapolate, or predict, that y is 14 when x is 4.
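
A quick check of these steps, as a sketch using numpy's least-squares polynomial fit on the same three points:

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([5, 8, 11])

# Fit a degree-1 polynomial (a straight line) to the points
c, m = np.polyfit(x, y, deg=1)  # slope c and intercept m, as in y = c*x + m
print(c, m)                     # 3.0, 2.0  ->  y = 3*x + 2

# Extrapolate: predict y when x is 4
print(c * 4 + m)                # 14.0
```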

Autoregression

Autoregressive models apply linear regression to lagged values of their own output from previous steps. Unlike linear regression, the autoregressive model doesn't use any independent variables other than the previously predicted results. Consider the following formula:

P(x1, x2, …, xn) = P(x1) * P(x2 | x1) * … * P(xn | x1, …, xn−1)

When expressed in probabilistic terms like this, an autoregressive model distributes the variables over n possible steps, assuming that earlier variables conditionally influence the outcome of the next one. 

We can also express autoregressive modeling with the equation below:

yt = c + ϕ1*y(t−1) + ϕ2*y(t−2) + … + ϕp*y(t−p) + εt

Here, the prediction outcome yt is the sum of multiple orders of previous results, each multiplied by its respective coefficient, ϕ. The coefficients represent weights or parameters that determine each predictor's influence on the new result. The formula also includes a random noise term, εt, that may affect the prediction, indicating that the model is not ideal and further improvement is possible.  
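
As a worked sketch of this equation, here is a single AR(2) prediction (p = 2) computed by hand; the constant, coefficients, and past values are made up for the example:

```python
# AR(2): yt = c + phi1*y(t-1) + phi2*y(t-2) + noise
c, phi1, phi2 = 1.0, 0.6, 0.3  # illustrative constant and coefficients
y_t1, y_t2 = 10.0, 8.0         # the two most recent values in the series
noise = 0.0                    # zero noise keeps the sketch deterministic

y_t = c + phi1 * y_t1 + phi2 * y_t2 + noise
print(y_t)  # 1.0 + 6.0 + 2.4 = 9.4
```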

Lag

Data scientists add more lagged values to improve autoregressive modeling accuracy. They do so by increasing the value of p, which denotes the number of lagged steps from the time series that the model uses. A higher number of steps allows the model to capture more past predictions as input. For example, you can expand an autoregressive model to include the predicted temperature from the past 14 days instead of the past 7 days to get a more accurate outcome. That said, increasing the lag order of an autoregressive model does not always result in improved accuracy. If a coefficient is close to zero, the corresponding predictor has little influence on the result of the model. Moreover, indefinitely expanding the sequence results in a more complex model that requires more computing resources to run.
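
Rather than expanding the lag order indefinitely, you can search for a lag order that balances fit against complexity. Here is a sketch using statsmodels' ar_select_order, which compares candidate lag orders by information criterion; the synthetic series is the illustrative one from earlier:

```python
import numpy as np
from statsmodels.tsa.ar_model import ar_select_order

rng = np.random.default_rng(42)
t = np.arange(200)
series = 20 + 5 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, len(t))

# Evaluate every lag order up to 14 and keep the best-scoring one
selection = ar_select_order(series, maxlag=14)
print(selection.ar_lags)  # the lag order chosen by the search
```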

What is autocorrelation?

Autocorrelation is a statistical method that evaluates how strongly the output of an autoregressive model is influenced by its lagged variables. Data scientists use autocorrelation to describe the relationship between the output and lagged inputs of a model. The higher the correlation, the higher the prediction accuracy of the model. The following are some considerations with autocorrelation:

  • A positive correlation means that the output follows the trends charted in the previous values. For example, the model predicts that the stock price will increase today because it has increased for the past few days.
  • A negative correlation means that the output variable heads in the opposite direction from previous results. For example, the autoregressive system observes that it has rained for the past few days but predicts a sunny day tomorrow.
  • Zero correlation might indicate a lack of specific patterns between input and output.

Data engineers use autocorrelation to determine how many steps they should include in the model to optimize computing resources and response accuracy. In some applications, the autoregressive model might show strong autocorrelation when using variables from the immediate past but weaker autocorrelation for more distant inputs. For example, engineers found that an autoregressive weather predictor is less sensitive to past predictions from more than 30 days ago. So, they revised the model to include only lagged results from the past 30 days. This led to more accurate results using fewer computing resources. 
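
For example, here is a sketch of measuring autocorrelation with statsmodels' acf function on the same synthetic series; lags whose autocorrelation is near zero are candidates to drop from the model:

```python
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(42)
t = np.arange(200)
series = 20 + 5 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, len(t))

# Correlation of the series with its own lagged values, up to lag 14
autocorr = acf(series, nlags=14)
for lag, value in enumerate(autocorr):
    print(f"lag {lag:2d}: {value:+.2f}")  # +1.00 at lag 0, peaks near lag 7
```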

What is the difference between autoregression and other types of regressive analysis techniques?

Apart from autoregression, several regressive techniques have been introduced to analyze variables and their interdependencies. The following sections describe the differences. 

Linear regression compared with autoregression

Both regression methods assume that past variables share a linear relationship with future values. Linear regression predicts an outcome based on several independent variables within the same timeframe. Meanwhile, autoregression uses only one variable type but expands it over several time steps to predict the future outcome. For example, you would use linear regression to predict your commute time based on the weather, traffic volume, and your walking speed. Alternatively, an autoregression model uses your past commute times to estimate today's arrival time.

Polynomial regression compared with autoregression

Polynomial regression is a statistical method that captures non-linear relationships between variables. Some relationships can't be represented by a straight line and require additional polynomial terms to reflect them more accurately. For example, engineers use polynomial regression to analyze employee earnings based on education level. Meanwhile, autoregression is suitable for predicting the future income of an employee based on their previous salaries. 
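
To sketch the difference, here is a degree-2 polynomial fit with numpy on data that a straight line cannot represent well; the data points are made up for the example:

```python
import numpy as np

# Earnings-like data that curves upward rather than following a line
x = np.array([1, 2, 3, 4, 5])
y = np.array([2, 5, 10, 17, 26])  # generated from y = x**2 + 1

# Degree-2 polynomial regression: y = a*x**2 + b*x + c
a, b, c = np.polyfit(x, y, deg=2)
print(a, b, c)  # approximately 1.0, 0.0, 1.0
```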

Logistic regression compared with autoregression

Logistic regression allows a statistical model to predict the likelihood of a specific event in probabilistic terms. It expresses the prediction outcome as a percentage instead of a range of numbers. For example, business analysts use a logistic regression model to predict an 85 percent chance of a supply cost increase in the following month. Conversely, an autoregression model predicts the probable inventory price given its historical predictions for previous months. 
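
Here is a sketch contrasting the two outputs with scikit-learn: logistic regression returns the probability of an event rather than a numeric forecast. The training data is made up for the example:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: supplier price change (feature) vs. whether cost rose (label)
X = np.array([[0.1], [0.5], [1.2], [2.0], [2.5], [3.1]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)
# predict_proba returns [P(no increase), P(increase)] for each input
print(model.predict_proba([[2.2]]))
```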

Ridge regression compared with autoregression

Ridge regression is a variant of linear regression that allows the coefficients of a model to be restricted. Data scientists can adjust a penalty factor that compensates for the influence of the coefficients in modeling the outcome. The coefficients can be suppressed to near zero in a ridge regression model. This is helpful when the regressive algorithm is prone to overfitting. Overfitting is a condition where the model performs well on training data but fails to generalize to unfamiliar real-world data. An autoregression model, meanwhile, does not have a coefficient penalty mechanism. 

Lasso regression compared with autoregression

Lasso regression is similar to ridge regression in that it restricts the variable coefficients with a penalty factor. However, lasso regression can suppress a coefficient all the way to zero. This allows data scientists to simplify complex models by ignoring non-critical parameters. Meanwhile, autoregressive models don't regulate their predictions with coefficient shrinkage.
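
Here is a sketch of both kinds of shrinkage with scikit-learn's Ridge and Lasso on the same illustrative data; note how lasso drives the irrelevant coefficient exactly to zero, while ridge only shrinks it:

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
# Only the first two features matter; the third is pure noise
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

print(Ridge(alpha=1.0).fit(X, y).coef_)  # all shrunk, none exactly zero
print(Lasso(alpha=0.1).fit(X, y).coef_)  # noise feature's coefficient is 0.0
```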

How can AWS help with your autoregressive models?

With Amazon Web Services (AWS), software teams can build, train, deploy, and scale autoregressive models for generative AI applications more efficiently. With enterprise-grade security and managed infrastructure, AWS simplifies generative model development for businesses and reduces time-to-market. For example, you can use:

  • Amazon Bedrock, which is a managed service that provides foundation models you can use to customize and innovate with your own data. 
  • Amazon SageMaker to build, train, and deploy ML models for any use case.
  • AWS Trainium and AWS Inferentia to train, host, and scale generative AI applications in the cloud with high-performance, low-cost computing power. 

Get started with autoregressive models on AWS by creating an account today.
