
AI in Practice: A Roundup of Open-Source Code for Transformer-Based Numerical Time-Series Forecasting

Editor: rootadmin

The Transformer is a model that uses the attention mechanism to speed up training. It can be described as a deep-learning model built entirely on self-attention; because it lends itself to parallel computation, and because of the capacity of the model itself, it surpasses the previously popular RNN (recurrent neural network) in both accuracy and performance.
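The self-attention mentioned above can be sketched in a few lines of NumPy. This is a minimal illustration of scaled dot-product attention, not code from any of the repositories below; the weight matrices are random stand-ins for learned parameters.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence x of shape (seq_len, d_model)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # pairwise similarity between time steps
    # numerically stable softmax: each row of weights sums to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
x = rng.standard_normal((seq_len, d_model))
w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
out, attn = self_attention(x, w_q, w_k, w_v)
print(out.shape, attn.shape)  # (5, 8) (5, 5)
```

Because every time step attends to every other step in one matrix product, the whole sequence is processed in parallel, which is exactly what makes the architecture faster to train than a step-by-step RNN.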

Below is a record of open-source code for numerical time-series forecasting with Transformers.

time_series_forecasting

Code: https://github.com/CVxTz/time_series_forecasting

Transformer-Time-Series-Forecasting

Code: https://github.com/nklingen/Transformer-Time-Series-Forecasting

Article: https://natasha-klingenbrunn.medium.com/transformer-implementation-for-time-series-forecasting-a9db2db5c820

Transformer_Time_Series

Code: https://github.com/mlpotter/Transformer_Time_Series

Paper: Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting (NeurIPS 2019), https://arxiv.org/pdf/1907.00235.pdf

Non-AR Spatial-Temporal Transformer

An implementation of the paper NAST: Non-Autoregressive Spatial-Temporal Transformer for Time Series Forecasting (submitted to ICML 2021).

We propose a Non-Autoregressive Transformer architecture for time series forecasting, aiming at overcoming the time delay and accumulative error issues in the canonical Transformer. Moreover, we present a novel spatial-temporal attention mechanism, building a bridge by a learned temporal influence map to fill the gaps between the spatial and temporal attention, so that spatial and temporal dependencies can be processed integrally.
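The accumulative-error issue the paper targets can be shown with a toy sketch in plain Python/NumPy (this is illustrative only, not the NAST code): an autoregressive decoder feeds each prediction back as the next input, while a non-autoregressive decoder emits the whole horizon in a single pass from the encoded history.

```python
import numpy as np

# Toy one-step "model": next value = last value plus noise, standing in
# for a learned predictor. Purely illustrative; not the NAST model.
def step_predict(last, rng):
    return last + rng.normal(0.0, 0.1)

def autoregressive_forecast(history, horizon, rng):
    # Canonical Transformer decoding: each predicted step is fed back as
    # input for the next one, so early errors compound over the horizon.
    preds = []
    last = history[-1]
    for _ in range(horizon):
        last = step_predict(last, rng)
        preds.append(last)
    return np.array(preds)

def non_autoregressive_forecast(history, horizon, rng):
    # NAST-style decoding: all `horizon` steps are emitted in one pass
    # from the encoded history; no prediction is fed back in, so a bad
    # early step cannot contaminate the later ones.
    last = history[-1]
    return np.array([step_predict(last, rng) for _ in range(horizon)])

rng = np.random.default_rng(0)
history = np.array([1.0, 1.1, 1.2])
print(autoregressive_forecast(history, 5, rng).shape)  # (5,)
```

In the autoregressive loop the variance of later predictions grows with the horizon; the one-shot decoder keeps every step conditioned only on the real history.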

Paper: https://arxiv.org/pdf/2102.05624.pdf

Code: https://github.com/Flawless1202/Non-AR-Spatial-Temporal-Transformer

Multidimensional-time-series-with-transformer

Transformer/self-attention for multidimensional time-series forecasting (multidimensional time-series prediction implemented with a Transformer architecture).

Refer to https://github.com/oliverguhr/transformer-time-series-prediction

Code: https://github.com/RuifMaxx/Multidimensional-time-series-with-transformer

TCCT2021

Convolutional Transformer Architectures Complementary to Time Series Forecasting Transformer Models

Paper: TCCT: Tightly-Coupled Convolutional Transformer on Time Series Forecasting https://arxiv.org/abs/2108.12784

It has already been accepted by Neurocomputing:

Journal ref.: Neurocomputing, Volume 480, 1 April 2022, Pages 131-145

doi: 10.1016/j.neucom.2022.01.039

Code: https://github.com/OrigamiSL/TCCT2021-Neurocomputing-

Time_Series_Transformers

This directory contains a PyTorch/PyTorch Lightning implementation of transformers applied to time series. It focuses on Transformer-XL and Compressive Transformers.

Transformer-XL is described in this paper Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context by Zihang Dai*, Zhilin Yang*, Yiming Yang, Jaime Carbonell, Quoc V. Le, Ruslan Salakhutdinov (*: equal contribution) Preprint 2018.

Part of this code is from the authors at https://github.com/kimiyoung/transformer-xl.

Code: https://github.com/Emmanuel-R8/Time_Series_Transformers

Multi-Transformer: A new neural network-based architecture for forecasting S&P volatility

Transformer layers have already been successfully applied for NLP purposes. This repository adapts Transformer layers so they can be used within hybrid volatility forecasting models. Following the intuition of bagging, the repository also introduces Multi-Transformer layers. The aim of this novel architecture is to improve the stability and accuracy of Transformer layers by averaging multiple attention mechanisms.
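The bagging idea behind Multi-Transformer layers, averaging the outputs of several independently parameterised attention mechanisms, can be sketched as follows. This is a minimal NumPy illustration, not code from the repository; the projection matrices are random stand-ins for learned weights.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_attention(x, proj_sets):
    """Average the outputs of several independently parameterised
    attention mechanisms, in the spirit of bagging."""
    outs = []
    for w_q, w_k, w_v in proj_sets:
        q, k, v = x @ w_q, x @ w_k, x @ w_v
        a = softmax(q @ k.T / np.sqrt(q.shape[-1]))
        outs.append(a @ v)
    return np.mean(outs, axis=0)  # ensemble average over the mechanisms

rng = np.random.default_rng(1)
x = rng.standard_normal((6, 4))  # (seq_len, d_model)
proj_sets = [tuple(rng.standard_normal((4, 4)) for _ in range(3))
             for _ in range(5)]  # 5 independent attention parameterisations
out = multi_attention(x, proj_sets)
print(out.shape)  # (6, 4)
```

As with bagging in general, averaging several randomly initialised (and, in the real model, independently trained) attention blocks reduces the variance of the combined output relative to any single block.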

The article collecting the theoretical background and empirical results of the proposed model can be downloaded here. The stock volatility models based on Transformer and Multi-Transformer (T-GARCH, TL-GARCH, MT-GARCH and MTL-GARCH) outperform traditional autoregressive algorithms and other hybrid models based on feed-forward layers or LSTM units. A table in the repository collects the validation error (RMSE) by year and model.

Code: https://github.com/EduardoRamosP/MultiTransformer


A good, complete example

Code: https://github.com/OrigamiSL/TCCT2021-Neurocomputing- and https://github.com/zhouhaoyi/Informer2020

import argparse

parser = argparse.ArgumentParser(description='[Informer] Long Sequences Forecasting')
parser.add_argument('--model', type=str, required=True, default='informer', help='model of experiment, options: [informer, informerstack, informerlight(TBD)]')
parser.add_argument('--data', type=str, required=True, default='ETTh1', help='data')
parser.add_argument('--root_path', type=str, default='./data/ETT/', help='root path of the data file')
parser.add_argument('--data_path', type=str, default='ETTh1.csv', help='data file')
parser.add_argument('--features', type=str, default='M', help='forecasting task, options:[M, S, MS]; M:multivariate predict multivariate, S:univariate predict univariate, MS:multivariate predict univariate')
parser.add_argument('--target', type=str, default='OT', help='target feature in S or MS task')
parser.add_argument('--freq', type=str, default='h', help='freq for time features encoding, options:[s:secondly, t:minutely, h:hourly, d:daily, b:business days, w:weekly, m:monthly], you can also use more detailed freq like 15min or 3h')
parser.add_argument('--checkpoints', type=str, default='./checkpoints/', help='location of model checkpoints')
parser.add_argument('--seq_len', type=int, default=96, help='input sequence length of Informer encoder')
parser.add_argument('--label_len', type=int, default=48, help='start token length of Informer decoder')
parser.add_argument('--pred_len', type=int, default=24, help='prediction sequence length')
# Informer decoder input: concat[start token series(label_len), zero padding series(pred_len)]
parser.add_argument('--enc_in', type=int, default=7, help='encoder input size')
parser.add_argument('--dec_in', type=int, default=7, help='decoder input size')
parser.add_argument('--c_out', type=int, default=7, help='output size')
parser.add_argument('--d_model', type=int, default=512, help='dimension of model')
parser.add_argument('--n_heads', type=int, default=8, help='num of heads')
parser.add_argument('--e_layers', type=int, default=2, help='num of encoder layers')
parser.add_argument('--d_layers', type=int, default=1, help='num of decoder layers')
parser.add_argument('--s_layers', type=str, default='3,2,1', help='num of stack encoder layers')
parser.add_argument('--d_ff', type=int, default=2048, help='dimension of fcn')
parser.add_argument('--factor', type=int, default=5, help='probsparse attn factor')
parser.add_argument('--distil', action='store_false', help='whether to use distilling in encoder, using this argument means not using distilling', default=True)
parser.add_argument('--CSP', action='store_true', help='whether to use CSPAttention, default=False', default=False)
parser.add_argument('--dilated', action='store_true', help='whether to use dilated causal convolution in encoder, default=False', default=False)
parser.add_argument('--passthrough', action='store_true', help='whether to use passthrough mechanism in encoder, default=False', default=False)
parser.add_argument('--dropout', type=float, default=0.05, help='dropout')
parser.add_argument('--attn', type=str, default='prob', help='attention used in encoder, options:[prob, full, log]')
parser.add_argument('--embed', type=str, default='timeF', help='time features encoding, options:[timeF, fixed, learned]')
parser.add_argument('--activation', type=str, default='gelu', help='activation')
parser.add_argument('--output_attention', action='store_true', help='whether to output attention in encoder')
parser.add_argument('--do_predict', action='store_true', help='whether to predict unseen future data')
parser.add_argument('--num_workers', type=int, default=0, help='data loader num workers')
parser.add_argument('--itr', type=int, default=2, help='experiments times')
parser.add_argument('--train_epochs', type=int, default=6, help='train epochs')
parser.add_argument('--batch_size', type=int, default=16, help='batch size of train input data')
parser.add_argument('--patience', type=int, default=3, help='early stopping patience')
parser.add_argument('--learning_rate', type=float, default=0.0001, help='optimizer learning rate')
parser.add_argument('--des', type=str, default='test', help='exp description')
parser.add_argument('--loss', type=str, default='mse', help='loss function')
parser.add_argument('--lradj', type=str, default='type1', help='adjust learning rate')
parser.add_argument('--use_amp', action='store_true', help='use automatic mixed precision training', default=False)
parser.add_argument('--inverse', action='store_true', help='inverse output data', default=False)
parser.add_argument('--use_gpu', type=bool, default=True, help='use gpu')
parser.add_argument('--gpu', type=int, default=0, help='gpu')
parser.add_argument('--use_multi_gpu', action='store_true', help='use multiple gpus', default=False)
parser.add_argument('--devices', type=str, default='0,1,2,3', help='device ids of multiple gpus')
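Assuming the entry point is `main_informer.py`, as in the Informer2020 repository, a typical invocation of the parser above might look like this (the values shown are just the documented defaults made explicit; only `--model` and `--data` are required):

```shell
# Hypothetical invocation; entry-point name taken from the Informer2020 repo.
python main_informer.py \
    --model informer \
    --data ETTh1 \
    --features M \
    --seq_len 96 --label_len 48 --pred_len 24 \
    --train_epochs 6 --batch_size 16 \
    --do_predict
```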


Dataset: https://github.com/zhouhaoyi/ETDataset
Permalink: https://www.jiuchutong.com/zhishi/288792.html. Please retain this notice when reposting.
