
AI in Practice: A Roundup of Open-Source Code for Transformer-Based Numerical Time Series Forecasting

Editor: rootadmin



The Transformer is a model that uses an attention mechanism to speed up training. It can be described as a deep learning model built entirely on self-attention; because it lends itself to parallel computation, and because of the capacity of the model itself, it surpasses the previously popular RNN (recurrent neural network) in both accuracy and performance.
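The self-attention operation at the core of all the repositories below can be sketched in a few lines. This is a minimal, illustrative NumPy version (identity projections instead of learned ones), not code from any of the listed projects:

```python
# Minimal scaled dot-product self-attention sketch in NumPy.
# For simplicity the query/key/value projections are the identity.
import numpy as np

def self_attention(x):
    """x: (seq_len, d_model) -> (seq_len, d_model)."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                   # pairwise similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ x                              # weighted sum of values

x = np.random.randn(10, 16)   # 10 time steps, 16 features
out = self_attention(x)
print(out.shape)              # (10, 16)
```

Every position attends to every other position in one matrix multiplication, which is why the whole sequence can be processed in parallel, unlike an RNN's step-by-step recurrence.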

Below is a record of open-source code for numerical time series forecasting with Transformers.

time_series_forecasting

Code: https://github.com/CVxTz/time_series_forecasting

Transformer-Time-Series-Forecasting

Code: https://github.com/nklingen/Transformer-Time-Series-Forecasting

Article: https://natasha-klingenbrunn.medium.com/transformer-implementation-for-time-series-forecasting-a9db2db5c820

Transformer_Time_Series

Code: https://github.com/mlpotter/Transformer_Time_Series

Paper: Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting (NeurIPS 2019), https://arxiv.org/pdf/1907.00235.pdf
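One of this paper's ideas for enhancing locality is to compute queries and keys with causal 1-D convolutions instead of pointwise projections, so each position summarizes a local window of past values. A hypothetical NumPy sketch of the causal convolution itself (random toy kernel, not the paper's learned parameters):

```python
# Causal 1-D convolution: output[t] depends only on x[t-k+1..t].
import numpy as np

def causal_conv1d(x, kernel):
    """x: (seq_len,), kernel: (k,). Left-pads with zeros to stay causal."""
    k = len(kernel)
    padded = np.concatenate([np.zeros(k - 1), x])
    return np.array([padded[t:t + k] @ kernel for t in range(len(x))])

x = np.arange(6, dtype=float)         # toy series [0, 1, 2, 3, 4, 5]
q = causal_conv1d(x, np.ones(3) / 3)  # each "query" = mean of last 3 steps
print(q)                              # [0.  0.333...  1.  2.  3.  4.]
```

With kernel size 1 this reduces to the standard pointwise projection of canonical self-attention.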

Non-AR Spatial-Temporal Transformer

Introduction

Implementation of the paper NAST: Non-Autoregressive Spatial-Temporal Transformer for Time Series Forecasting (submitted to ICML 2021).

We propose a Non-Autoregressive Transformer architecture for time series forecasting, aiming at overcoming the time delay and accumulative error issues in the canonical Transformer. Moreover, we present a novel spatial-temporal attention mechanism, building a bridge by a learned temporal influence map to fill the gaps between the spatial and temporal attention, so that spatial and temporal dependencies can be processed integrally.
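The accumulative-error issue the abstract mentions comes from the decoding loop: an autoregressive decoder feeds each prediction back in as input, while a non-autoregressive decoder emits the whole horizon in one pass. A sketch of the two control flows, with a stand-in "model" (naive last-value persistence) in place of a real network:

```python
# Autoregressive vs. non-autoregressive forecasting control flow.
# The "model" here is a naive persistence predictor, for illustration only.
import numpy as np

def forecast_autoregressive(history, pred_len):
    seq = list(history)
    for _ in range(pred_len):
        y = seq[-1]      # one-step prediction (stand-in model)
        seq.append(y)    # prediction fed back in -> errors can accumulate
    return np.array(seq[-pred_len:])

def forecast_non_autoregressive(history, pred_len):
    # one forward pass emits all pred_len steps; no feedback loop
    return np.full(pred_len, history[-1])

h = np.array([1.0, 2.0, 3.0])
print(forecast_autoregressive(h, 4))      # [3. 3. 3. 3.]
print(forecast_non_autoregressive(h, 4))  # [3. 3. 3. 3.]
```

With a persistence model the outputs coincide; with a learned model the autoregressive version compounds each step's error, which is what NAST is designed to avoid.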

Paper: https://arxiv.org/pdf/2102.05624.pdf
Code: https://github.com/Flawless1202/Non-AR-Spatial-Temporal-Transformer

Multidimensional-time-series-with-transformer

Transformer/self-attention for multidimensional time series forecasting — implements multivariate time series prediction with a Transformer architecture.

Refer to https://github.com/oliverguhr/transformer-time-series-prediction
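Before a multivariate series reaches any of these models, it is typically cut into overlapping windows of shape (n_windows, seq_len, n_features). A small illustrative sketch of that preprocessing step (function name and sizes are my own, not from the repository):

```python
# Slice a (T, n_features) series into overlapping training windows.
import numpy as np

def make_windows(series, seq_len):
    """series: (T, n_features) -> (T - seq_len + 1, seq_len, n_features)."""
    T = series.shape[0]
    return np.stack([series[i:i + seq_len] for i in range(T - seq_len + 1)])

series = np.random.randn(100, 7)          # 100 time steps, 7 variables
batch = make_windows(series, seq_len=24)  # sliding windows of 24 steps
print(batch.shape)                        # (77, 24, 7)
```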

Code: https://github.com/RuifMaxx/Multidimensional-time-series-with-transformer

TCCT2021

Convolutional Transformer Architectures Complementary to Time Series Forecasting Transformer Models

Paper: TCCT: Tightly-Coupled Convolutional Transformer on Time Series Forecasting https://arxiv.org/abs/2108.12784

It has already been accepted by Neurocomputing:

Journal ref.: Neurocomputing, Volume 480, 1 April 2022, Pages 131-145

doi: 10.1016/j.neucom.2022.01.039

Code: https://github.com/OrigamiSL/TCCT2021-Neurocomputing-

Time_Series_Transformers

Introduction

This directory contains a PyTorch/PyTorch Lightning implementation of transformers applied to time series. We focus on Transformer-XL and Compressive Transformers.

Transformer-XL is described in the paper Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, and Ruslan Salakhutdinov (*: equal contribution), preprint 2018.

Part of this code is from the authors at https://github.com/kimiyoung/transformer-xl.

Code: https://github.com/Emmanuel-R8/Time_Series_Transformers

Multi-Transformer: A new neural network-based architecture for forecasting S&P volatility

Transformer layers have already been applied successfully for NLP purposes. This repository adapts Transformer layers so they can be used within hybrid volatility forecasting models. Following the intuition of bagging, this repository also introduces Multi-Transformer layers. The aim of this novel architecture is to improve the stability and accuracy of Transformer layers by averaging multiple attention mechanisms.

The article collecting the theoretical background and empirical results of the proposed model can be downloaded from the repository. The stock volatility models based on Transformer and Multi-Transformer layers (T-GARCH, TL-GARCH, MT-GARCH and MTL-GARCH) outperform traditional autoregressive algorithms and other hybrid models based on feed-forward layers or LSTM units. A table in the repository collects the validation error (RMSE) by year and model.
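The bagging intuition behind Multi-Transformer layers can be sketched as follows: run several independently initialized attention mechanisms on the same input and average their outputs. This is a hypothetical NumPy illustration with random stand-in projection matrices, not the paper's trained parameters:

```python
# Average the outputs of several independently parameterized attention
# mechanisms (the bagging idea behind Multi-Transformer layers).
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def attention(x, Wq, Wk, Wv):
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    return softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v

x = rng.standard_normal((20, 8))   # (seq_len, d_model)
mechanisms = [tuple(rng.standard_normal((8, 8)) for _ in range(3))
              for _ in range(4)]   # 4 independent Wq/Wk/Wv triples
avg = np.mean([attention(x, *w) for w in mechanisms], axis=0)
print(avg.shape)                   # (20, 8)
```

Averaging reduces the variance of any single randomly initialized mechanism, which is the stability argument the repository makes.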

Code: https://github.com/EduardoRamosP/MultiTransformer


A good, complete end-to-end example

Code: https://github.com/OrigamiSL/TCCT2021-Neurocomputing- and https://github.com/zhouhaoyi/Informer2020

```python
import argparse

parser = argparse.ArgumentParser(description='[Informer] Long Sequences Forecasting')
parser.add_argument('--model', type=str, required=True, default='informer', help='model of experiment, options: [informer, informerstack, informerlight(TBD)]')
parser.add_argument('--data', type=str, required=True, default='ETTh1', help='data')
parser.add_argument('--root_path', type=str, default='./data/ETT/', help='root path of the data file')
parser.add_argument('--data_path', type=str, default='ETTh1.csv', help='data file')
parser.add_argument('--features', type=str, default='M', help='forecasting task, options: [M, S, MS]; M: multivariate predict multivariate, S: univariate predict univariate, MS: multivariate predict univariate')
parser.add_argument('--target', type=str, default='OT', help='target feature in S or MS task')
parser.add_argument('--freq', type=str, default='h', help='freq for time features encoding, options: [s: secondly, t: minutely, h: hourly, d: daily, b: business days, w: weekly, m: monthly]; you can also use a more detailed freq like 15min or 3h')
parser.add_argument('--checkpoints', type=str, default='./checkpoints/', help='location of model checkpoints')
parser.add_argument('--seq_len', type=int, default=96, help='input sequence length of Informer encoder')
parser.add_argument('--label_len', type=int, default=48, help='start token length of Informer decoder')
parser.add_argument('--pred_len', type=int, default=24, help='prediction sequence length')
# Informer decoder input: concat[start token series(label_len), zero padding series(pred_len)]
parser.add_argument('--enc_in', type=int, default=7, help='encoder input size')
parser.add_argument('--dec_in', type=int, default=7, help='decoder input size')
parser.add_argument('--c_out', type=int, default=7, help='output size')
parser.add_argument('--d_model', type=int, default=512, help='dimension of model')
parser.add_argument('--n_heads', type=int, default=8, help='num of heads')
parser.add_argument('--e_layers', type=int, default=2, help='num of encoder layers')
parser.add_argument('--d_layers', type=int, default=1, help='num of decoder layers')
parser.add_argument('--s_layers', type=str, default='3,2,1', help='num of stack encoder layers')
parser.add_argument('--d_ff', type=int, default=2048, help='dimension of fcn')
parser.add_argument('--factor', type=int, default=5, help='probsparse attn factor')
parser.add_argument('--distil', action='store_false', help='whether to use distilling in encoder; passing this argument means not using distilling', default=True)
parser.add_argument('--CSP', action='store_true', help='whether to use CSPAttention, default=False', default=False)
parser.add_argument('--dilated', action='store_true', help='whether to use dilated causal convolution in encoder, default=False', default=False)
parser.add_argument('--passthrough', action='store_true', help='whether to use passthrough mechanism in encoder, default=False', default=False)
parser.add_argument('--dropout', type=float, default=0.05, help='dropout')
parser.add_argument('--attn', type=str, default='prob', help='attention used in encoder, options: [prob, full, log]')
parser.add_argument('--embed', type=str, default='timeF', help='time features encoding, options: [timeF, fixed, learned]')
parser.add_argument('--activation', type=str, default='gelu', help='activation')
parser.add_argument('--output_attention', action='store_true', help='whether to output attention in encoder')
parser.add_argument('--do_predict', action='store_true', help='whether to predict unseen future data')
parser.add_argument('--num_workers', type=int, default=0, help='data loader num workers')
parser.add_argument('--itr', type=int, default=2, help='experiment repetitions')
parser.add_argument('--train_epochs', type=int, default=6, help='train epochs')
parser.add_argument('--batch_size', type=int, default=16, help='batch size of train input data')
parser.add_argument('--patience', type=int, default=3, help='early stopping patience')
parser.add_argument('--learning_rate', type=float, default=0.0001, help='optimizer learning rate')
parser.add_argument('--des', type=str, default='test', help='exp description')
parser.add_argument('--loss', type=str, default='mse', help='loss function')
parser.add_argument('--lradj', type=str, default='type1', help='adjust learning rate')
parser.add_argument('--use_amp', action='store_true', help='use automatic mixed precision training', default=False)
parser.add_argument('--inverse', action='store_true', help='inverse output data', default=False)
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=0, help='gpu')
parser.add_argument('--use_multi_gpu', action='store_true', help='use multiple gpus', default=False)
parser.add_argument('--devices', type=str, default='0,1,2,3', help='device ids of multiple gpus')
```
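The comment in the argument list above — decoder input = concat[start token series(label_len), zero padding series(pred_len)] — can be illustrated directly with the default sizes (seq_len=96, label_len=48, pred_len=24, enc_in=7). A sketch of how that decoder input is assembled:

```python
# Build the Informer decoder input: the last label_len steps of the
# encoder window as "start tokens", then zeros where predictions go.
import numpy as np

seq_len, label_len, pred_len = 96, 48, 24
enc_in = 7

x_enc = np.random.randn(seq_len, enc_in)   # encoder sees the full input window
start_tokens = x_enc[-label_len:]          # last label_len steps reused
x_dec = np.concatenate([start_tokens,
                        np.zeros((pred_len, enc_in))])  # placeholder for outputs
print(x_dec.shape)   # (72, 7) = (label_len + pred_len, enc_in)
```

The zero-padded tail is what lets the decoder generate all pred_len steps in a single forward pass.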


Dataset: https://github.com/zhouhaoyi/ETDataset
Original article: https://www.jiuchutong.com/zhishi/288792.html
