
TensorFlow Installation Tutorial


诸神缄默不语 - Personal CSDN blog post index

TensorFlow is a Python neural network framework commonly used when learning deep learning. This article covers installing several of its versions on Linux with pip. (Note: the TensorFlow documentation officially recommends installing with pip.)

I use Anaconda to manage virtual environments, and all of the work below is done inside a virtual environment. Installing Python and Anaconda and managing virtual environments are beyond the scope of this article; I may write separate posts about them later.

Start from the official site: TensorFlow. The top-level installation page is Install TensorFlow 2.

Contents
1. Installing the latest TensorFlow 2 (2.9.0 at the time of writing)
2. TensorFlow 1.14 + Keras 2.3.1 (installed on 2022.8.17)
3. Other references used while writing this article

1. Installing the latest TensorFlow 2 (2.9.0 at the time of writing)

Official installation guide: Install TensorFlow 2. Guide for installing with pip: Install TensorFlow with pip. TensorFlow's basic system requirements can be checked on that page; it is 2022 already, and most machines should not be too old to meet them.

Create a new anaconda virtual environment: conda create -n envtf2 python==3.8 (the Python version must be 3.7-3.10; I use 3.8 here mainly because another package I need requires it)
Activate the environment: conda activate envtf2
If you want to use CUDA, first confirm that an NVIDIA GPU driver is installed: nvidia-smi (it almost certainly is; you would hardly have gotten this far without one)
Install the specified cudatoolkit and cudnn versions: conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
There are two ways to configure the library path:
① Temporary: in each session, activate the environment and then run export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/
② Have it run automatically every time the environment is activated (I have not tried this; I always use the approach above):

mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/' > $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh

Upgrade pip: pip install --upgrade pip
Install TensorFlow: pip install tensorflow
Check that the CPU build works: python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
Check that the GPU build works: python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))" (my server has 4 GPUs)
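If you want something slightly more thorough than the two one-liners, the following small script is a sketch of my own (it only uses standard TensorFlow 2 calls and assumes the envtf2 environment set up above):

# verify_tf2.py - quick sanity check of a TensorFlow 2 GPU install (a sketch, not from the official guide)
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("Visible GPUs:", tf.config.list_physical_devices('GPU'))

# Run one small computation; if a GPU is visible it is placed there by default.
x = tf.random.normal([1000, 1000])
print("Sample result:", tf.reduce_sum(tf.matmul(x, x)).numpy())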

Note that all of the steps above are run in a terminal; they cannot simply be pasted into a Jupyter notebook. A failed example: in a Jupyter notebook I tried calling !export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/ directly, tried os.environ['LD_LIBRARY_PATH'], and tried $env, and none of them worked, which left me quite confused. From what I found, if you want this to work in a Jupyter notebook you apparently have to modify the Jupyter kernel, but even after editing the kernel.json under the path shown by jupyter kernelspec list it still did not work (see python - How to set env variable in Jupyter notebook - Stack Overflow). I also saw some solutions online that fix this through global configuration, but this server also has to run other projects on other versions, so that is not really an option for me. In practice my solution is simply not to run TF projects from Jupyter notebooks. The material I looked at, for reference: 解决TensorFlow在terminal中正常但在jupyter notebook中报错的方案 - stardsd - 博客园; Add CUDA Library Path to Jupyterhub Notebook - AIML - wiki.ucar.edu; install pytorch with jupyter - 知乎
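My guess as to why (my own reading, not something from the references above) is that the dynamic loader captures LD_LIBRARY_PATH when the kernel process starts, so changing it from inside an already-running notebook has no effect on where shared libraries are searched for:

# Inside a running Jupyter kernel this only changes the Python-level copy of the
# environment; the loader took its snapshot of LD_LIBRARY_PATH when the kernel
# process started, so a later `import tensorflow` still cannot find the CUDA
# libraries under $CONDA_PREFIX/lib/. (Illustration of the failure, not a fix.)
import os
os.environ['LD_LIBRARY_PATH'] = os.environ.get('LD_LIBRARY_PATH', '') + ':' + os.environ.get('CONDA_PREFIX', '') + '/lib/'
print(os.environ['LD_LIBRARY_PATH'])  # looks right, but library loading is unaffected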

So to use TensorFlow's GPU support from a Jupyter notebook, you have to activate the virtual environment on the command line first, then run export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$CONDA_PREFIX/lib/, and only then launch Jupyter with the jupyter notebook command; after that everything works normally. (Note: if only the ipykernel package is installed, VSCode can open notebook files, but the jupyter notebook command cannot serve the browser-based web UI, so you also need to install JupyterLab: pip install jupyterlab (see Project Jupyter | Installing Jupyter). Even on a remote server, VSCode can forward the port so you can open it in a local browser via localhost, which is quite convenient.) What a successful run looks like (screenshot omitted):

2. TensorFlow 1.14 + Keras 2.3.1 (installed on 2022.8.17)

This is the configuration used by 苏神's bert4keras (https://github.com/bojone/bert4keras).


According to the TensorFlow website (使用 pip 安装 TensorFlow), only TensorFlow 2.2 and later support Python 3.8 and later, so I need a Python 3.7 environment.
Create a new anaconda virtual environment: conda create -n envtf114 python=3.7 pip
Install the GPU build of TensorFlow: pip install tensorflow-gpu==1.14

Try the following test code (from tensorflow-gpu1.14代码测试_爱听许嵩歌的博客-CSDN博客_tensorflow-gpu测试代码):

import tensorflow as tf

with tf.device('/cpu:0'):
    a = tf.constant([1.0, 2.0, 3.0], shape=[3], name='a')
    b = tf.constant([1.0, 2.0, 3.0], shape=[3], name='b')
with tf.device('/gpu:2'):
    c = a + b
# Note: allow_soft_placement=True lets TensorFlow pick a usable device on its own.
# Without it this script errors out, because not every op can be placed on a GPU,
# and forcing an op that cannot run on the GPU onto it raises an error.
sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True))
# sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
sess.run(tf.global_variables_initializer())
print(sess.run(c))

It errored out:

Traceback (most recent call last):
  File "trytf1.py", line 1, in <module>
    import tensorflow as tf
  File "env_path/lib/python3.7/site-packages/tensorflow/__init__.py", line 28, in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
  File "env_path/lib/python3.7/site-packages/tensorflow/python/__init__.py", line 52, in <module>
    from tensorflow.core.framework.graph_pb2 import *
  File "env_path/lib/python3.7/site-packages/tensorflow/core/framework/graph_pb2.py", line 16, in <module>
    from tensorflow.core.framework import node_def_pb2 as tensorflow_dot_core_dot_framework_dot_node__def__pb2
  File "env_path/lib/python3.7/site-packages/tensorflow/core/framework/node_def_pb2.py", line 16, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "env_path/lib/python3.7/site-packages/tensorflow/core/framework/attr_value_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "env_path/lib/python3.7/site-packages/tensorflow/core/framework/tensor_pb2.py", line 16, in <module>
    from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
  File "env_path/lib/python3.7/site-packages/tensorflow/core/framework/resource_handle_pb2.py", line 42, in <module>
    serialized_options=None, file=DESCRIPTOR),
  File "/home/wanghuijuan/anaconda3/envs/envtf114/lib/python3.7/site-packages/google/protobuf/descriptor.py", line 560, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates

I am not entirely sure what happened here, but I simply followed the advice in the message (see also 1. Downgrade the protobuf package to 3.20.x or lower._weixin_44834086的博客-CSDN博客):

pip install protobuf==3.19.0
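If you want to double-check which protobuf version actually ended up in the environment, a quick check (my own addition, not part of the fix itself) is:

# Confirm the installed protobuf version after the downgrade.
import google.protobuf
print(google.protobuf.__version__)  # expected: 3.19.0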

Then I reran the code, and this time the output became:

env_path/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)])env_path/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)])env_path/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)])env_path/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)])env_path/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)])env_path/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. np_resource = np.dtype([("resource", np.ubyte, 1)])env_path/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint8 = np.dtype([("qint8", np.int8, 1)])env_path/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint8 = np.dtype([("quint8", np.uint8, 1)])env_path/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint16 = np.dtype([("qint16", np.int16, 1)])env_path/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_quint16 = np.dtype([("quint16", np.uint16, 1)])env_path/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. _np_qint32 = np.dtype([("qint32", np.int32, 1)])env_path/lib/python3.7/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'. 
np_resource = np.dtype([("resource", np.ubyte, 1)])WARNING:tensorflow:From trytf1.py:11: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.WARNING:tensorflow:From trytf1.py:11: The name tf.ConfigProto is deprecated. Please use tf.compat.v1.ConfigProto instead.2022-08-17 15:27:08.829308: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA2022-08-17 15:27:08.865801: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 1800000000 Hz2022-08-17 15:27:08.867967: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55f031fff480 executing computations on platform Host. Devices:2022-08-17 15:27:08.868039: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): <undefined>, <undefined>2022-08-17 15:27:08.871550: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcuda.so.12022-08-17 15:27:09.628470: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x55f0332f1040 executing computations on platform CUDA. Devices:2022-08-17 15:27:09.628540: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (0): Tesla T4, Compute Capability 7.52022-08-17 15:27:09.628560: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (1): Tesla T4, Compute Capability 7.52022-08-17 15:27:09.628580: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (2): Tesla T4, Compute Capability 7.52022-08-17 15:27:09.628597: I tensorflow/compiler/xla/service/service.cc:175] StreamExecutor device (3): Tesla T4, Compute Capability 7.52022-08-17 15:27:09.645921: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 0 with properties: name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59pciBusID: 0000:3b:00.02022-08-17 15:27:09.650885: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 1 with properties: name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59pciBusID: 0000:5e:00.02022-08-17 15:27:09.652426: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 2 with properties: name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59pciBusID: 0000:b1:00.02022-08-17 15:27:09.653863: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1640] Found device 3 with properties: name: Tesla T4 major: 7 minor: 5 memoryClockRate(GHz): 1.59pciBusID: 0000:d9:00.02022-08-17 15:27:09.654104: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcudart.so.10.0'; dlerror: libcudart.so.10.0: cannot open shared object file: No such file or directory2022-08-17 15:27:09.654223: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcublas.so.10.0'; dlerror: libcublas.so.10.0: cannot open shared object file: No such file or directory2022-08-17 15:27:09.654332: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcufft.so.10.0'; dlerror: libcufft.so.10.0: cannot open shared object file: No such file or directory2022-08-17 15:27:09.654437: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcurand.so.10.0'; dlerror: libcurand.so.10.0: cannot open shared object file: No such file or directory2022-08-17 15:27:09.654540: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcusolver.so.10.0'; dlerror: 
libcusolver.so.10.0: cannot open shared object file: No such file or directory2022-08-17 15:27:09.654661: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcusparse.so.10.0'; dlerror: libcusparse.so.10.0: cannot open shared object file: No such file or directory2022-08-17 15:27:09.740681: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.72022-08-17 15:27:09.740741: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1663] Cannot dlopen some GPU libraries. Skipping registering GPU devices...2022-08-17 15:27:09.740824: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:2022-08-17 15:27:09.740848: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0 1 2 3 2022-08-17 15:27:09.740868: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N Y Y Y 2022-08-17 15:27:09.740886: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 1: Y N Y Y 2022-08-17 15:27:09.740904: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 2: Y Y N Y 2022-08-17 15:27:09.740921: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 3: Y Y Y N Device mapping:/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:1 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:2 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:3 -> device: XLA_GPU device2022-08-17 15:27:09.742912: I tensorflow/core/common_runtime/direct_session.cc:296] Device mapping:/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:1 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:2 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:3 -> device: XLA_GPU deviceWARNING:tensorflow:From trytf1.py:13: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.add: (Add): /job:localhost/replica:0/task:0/device:CPU:02022-08-17 15:27:09.746145: I tensorflow/core/common_runtime/placer.cc:54] add: (Add)/job:localhost/replica:0/task:0/device:CPU:0init: (NoOp): /job:localhost/replica:0/task:0/device:CPU:02022-08-17 15:27:09.746214: I tensorflow/core/common_runtime/placer.cc:54] init: (NoOp)/job:localhost/replica:0/task:0/device:CPU:0a: (Const): /job:localhost/replica:0/task:0/device:CPU:02022-08-17 15:27:09.746257: I tensorflow/core/common_runtime/placer.cc:54] a: (Const)/job:localhost/replica:0/task:0/device:CPU:0b: (Const): /job:localhost/replica:0/task:0/device:CPU:02022-08-17 15:27:09.746294: I tensorflow/core/common_runtime/placer.cc:54] b: (Const)/job:localhost/replica:0/task:0/device:CPU:02022-08-17 15:27:09.748161: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set. If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU. To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.[2. 4. 6.]

I omit the rest of the output; the point is that the pile of "Could not dlopen library ...so" messages means the CUDA setup is broken and the GPU cannot be used.

The TensorFlow-to-CUDA/cuDNN version mapping is documented at https://www.tensorflow.org/install/source#gpu (the table screenshot is omitted here); for TensorFlow 1.14 the required versions are CUDA 10.0 and cuDNN 7.4. So first install the required cudnn and cuda: conda install -c conda-forge cudatoolkit=10.0 cudnn=7.4

This produced a very odd error:

Collecting package metadata (current_repodata.json): doneSolving environment: failed with initial frozen solve. Retrying with flexible solve.Collecting package metadata (repodata.json): failed# >>>>>>>>>>>>>>>>>>>>>> ERROR REPORT <<<<<<<<<<<<<<<<<<<<<< Traceback (most recent call last): File "anaconda3/lib/python3.9/site-packages/urllib3/response.py", line 700, in _update_chunk_length self.chunk_left = int(line, 16) ValueError: invalid literal for int() with base 16: b'' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "anaconda3/lib/python3.9/site-packages/urllib3/response.py", line 441, in _error_catcher yield File "anaconda3/lib/python3.9/site-packages/urllib3/response.py", line 767, in read_chunked self._update_chunk_length() File "anaconda3/lib/python3.9/site-packages/urllib3/response.py", line 704, in _update_chunk_length raise InvalidChunkLength(self, line) urllib3.exceptions.InvalidChunkLength: InvalidChunkLength(got length b'', 0 bytes read) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "anaconda3/lib/python3.9/site-packages/requests/models.py", line 760, in generate for chunk in self.raw.stream(chunk_size, decode_content=True): File "anaconda3/lib/python3.9/site-packages/urllib3/response.py", line 575, in stream for line in self.read_chunked(amt, decode_content=decode_content): File "anaconda3/lib/python3.9/site-packages/urllib3/response.py", line 796, in read_chunked self._original_response.close() File "anaconda3/lib/python3.9/contextlib.py", line 137, in __exit__ self.gen.throw(typ, value, traceback) File "anaconda3/lib/python3.9/site-packages/urllib3/response.py", line 458, in _error_catcher raise ProtocolError("Connection broken: %r" % e, e) urllib3.exceptions.ProtocolError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read)) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "anaconda3/lib/python3.9/site-packages/conda/exceptions.py", line 1114, in __call__ return func(*args, **kwargs) File "anaconda3/lib/python3.9/site-packages/conda/cli/main.py", line 86, in main_subshell exit_code = do_call(args, p) File "anaconda3/lib/python3.9/site-packages/conda/cli/conda_argparse.py", line 90, in do_call return getattr(module, func_name)(args, parser) File "anaconda3/lib/python3.9/site-packages/conda/cli/main_install.py", line 20, in execute install(args, parser, 'install') File "anaconda3/lib/python3.9/site-packages/conda/cli/install.py", line 259, in install unlink_link_transaction = solver.solve_for_transaction( File "anaconda3/lib/python3.9/site-packages/conda/core/solve.py", line 152, in solve_for_transaction unlink_precs, link_precs = self.solve_for_diff(update_modifier, deps_modifier, File "anaconda3/lib/python3.9/site-packages/conda/core/solve.py", line 195, in solve_for_diff final_precs = self.solve_final_state(update_modifier, deps_modifier, prune, ignore_pinned, File "anaconda3/lib/python3.9/site-packages/conda/core/solve.py", line 300, in solve_final_state ssc = self._collect_all_metadata(ssc) File "anaconda3/lib/python3.9/site-packages/conda/common/io.py", line 86, in decorated return f(*args, **kwds) File "anaconda3/lib/python3.9/site-packages/conda/core/solve.py", line 463, in _collect_all_metadata index, r = self._prepare(prepared_specs) File "anaconda3/lib/python3.9/site-packages/conda/core/solve.py", line 1058, in _prepare 
reduced_index = get_reduced_index(self.prefix, self.channels, File "anaconda3/lib/python3.9/site-packages/conda/core/index.py", line 287, in get_reduced_index new_records = SubdirData.query_all(spec, channels=channels, subdirs=subdirs, File "anaconda3/lib/python3.9/site-packages/conda/core/subdir_data.py", line 139, in query_all result = tuple(concat(executor.map(subdir_query, channel_urls))) File "anaconda3/lib/python3.9/concurrent/futures/_base.py", line 609, in result_iterator yield fs.pop().result() File "anaconda3/lib/python3.9/concurrent/futures/_base.py", line 446, in result return self.__get_result() File "anaconda3/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result raise self._exception File "anaconda3/lib/python3.9/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File "anaconda3/lib/python3.9/site-packages/conda/core/subdir_data.py", line 131, in <lambda> subdir_query = lambda url: tuple(SubdirData(Channel(url), repodata_fn=repodata_fn).query( File "anaconda3/lib/python3.9/site-packages/conda/core/subdir_data.py", line 144, in query self.load() File "anaconda3/lib/python3.9/site-packages/conda/core/subdir_data.py", line 209, in load _internal_state = self._load() File "anaconda3/lib/python3.9/site-packages/conda/core/subdir_data.py", line 374, in _load raw_repodata_str = fetch_repodata_remote_request( File "anaconda3/lib/python3.9/site-packages/conda/core/subdir_data.py", line 700, in fetch_repodata_remote_request resp = session.get(join_url(url, filename), headers=headers, proxies=session.proxies, File "anaconda3/lib/python3.9/site-packages/requests/sessions.py", line 542, in get return self.request('GET', url, **kwargs) File "anaconda3/lib/python3.9/site-packages/requests/sessions.py", line 529, in request resp = self.send(prep, **send_kwargs) File "anaconda3/lib/python3.9/site-packages/requests/sessions.py", line 687, in send r.content File "anaconda3/lib/python3.9/site-packages/requests/models.py", line 838, in content self._content = b''.join(self.iter_content(CONTENT_CHUNK_SIZE)) or b'' File "anaconda3/lib/python3.9/site-packages/requests/models.py", line 763, in generate raise ChunkedEncodingError(e) requests.exceptions.ChunkedEncodingError: ("Connection broken: InvalidChunkLength(got length b'', 0 bytes read)", InvalidChunkLength(got length b'', 0 bytes read))`$ /home/wanghuijuan/anaconda3/bin/conda install -c conda-forge cudatoolkit=10.0 cudnn=7.4` environment variables: CIO_TEST=<not set> CONDA_DEFAULT_ENV= CONDA_EXE=anaconda3/bin/conda CONDA_PREFIX= CONDA_PREFIX_1=anaconda3 CONDA_PREFIX_2= CONDA_PROMPT_MODIFIER=() CONDA_PYTHON_EXE=anaconda3/bin/python CONDA_ROOT=anaconda3 CONDA_SHLVL=3 CURL_CA_BUNDLE=<not set> PATH=/anaconda3/condabin:/home/wanghuijuan/.vscode -server/bin/6d9b74a70ca9c7733b29f0456fd8195364076dda/bin/remote-cli:/u sr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games: /usr/local/games:/snap/bin REQUESTS_CA_BUNDLE=<not set> SSL_CERT_FILE=<not set> active environment : active env location : shell level : 3 user config file : .condarc populated config files : conda version : 4.13.0 conda-build version : 3.21.8 python version : 3.9.12.final.0 virtual packages : __cuda=11.4=0 __linux=4.15.0=0 __glibc=2.27=0 __unix=0=0 __archspec=1=x86_64 base environment :anaconda3 (writable) conda av data dir :anaconda3/etc/conda conda av metadata url : None channel URLs : https://conda.anaconda.org/conda-forge/linux-64 https://conda.anaconda.org/conda-forge/noarch 
https://repo.anaconda.com/pkgs/main/linux-64 https://repo.anaconda.com/pkgs/main/noarch https://repo.anaconda.com/pkgs/r/linux-64 https://repo.anaconda.com/pkgs/r/noarch package cache : /home/wanghuijuan/anaconda3/pkgs /home/wanghuijuan/.conda/pkgs envs directories : /home/wanghuijuan/anaconda3/envs /home/wanghuijuan/.conda/envs platform : linux-64 user-agent : conda/4.13.0 requests/2.27.1 CPython/3.9.12 Linux/4.15.0-136-generic ubuntu/18.04.4 glibc/2.27 UID:GID : 1018:1014 netrc file : None offline mode : FalseAn unexpected error has occurred. Conda has prepared the above report.If submitted, this report will be used by core maintainers to improvefuture releases of conda.Would you like conda to send this report to the core maintainers?[y/N]: yUpload did not complete.Thank you for helping to improve conda.Opt-in to always sending reports (and not see this message again)by running $ conda config --set report_errors true

I have no idea what went wrong there, so I simply switched to a different installation method (following TensorFlow-gpu安装和测试(TensorFlow-gpu1.14+Cuda10)_爱学习的小龙的博客-CSDN博客_tensorflowgpu测试):

wget -P files/install_packages https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/main/linux-64/cudatoolkit-10.0.130-0.conda
conda install files/install_packages/cudatoolkit-10.0.130-0.conda
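Before rerunning the full test script, one quick way to check whether the runtime library TensorFlow was complaining about can now be loaded is to try dlopen-ing it directly with ctypes (my own sanity check, not from the referenced post; the library names come from the error log above):

# If either load raises OSError, the library is still not on the loader's search path.
import ctypes

for lib in ('libcudart.so.10.0', 'libcudnn.so.7'):
    try:
        ctypes.CDLL(lib)
        print(lib, 'loaded OK')
    except OSError as e:
        print(lib, 'NOT found:', e)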

Then I reran the Python test code. The parts of the output identical to before are omitted; starting from where it differs:

2022-08-17 16:15:43.407219: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.02022-08-17 16:15:43.409338: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcublas.so.10.02022-08-17 16:15:43.411111: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcufft.so.10.02022-08-17 16:15:43.411878: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcurand.so.10.02022-08-17 16:15:43.415478: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusolver.so.10.02022-08-17 16:15:43.418072: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcusparse.so.10.02022-08-17 16:15:43.424901: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudnn.so.72022-08-17 16:15:43.435064: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1763] Adding visible gpu devices: 0, 1, 2, 32022-08-17 16:15:43.435492: I tensorflow/stream_executor/platform/default/dso_loader.cc:42] Successfully opened dynamic library libcudart.so.10.02022-08-17 16:15:43.441476: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1181] Device interconnect StreamExecutor with strength 1 edge matrix:2022-08-17 16:15:43.442070: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1187] 0 1 2 3 2022-08-17 16:15:43.442448: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 0: N Y Y Y 2022-08-17 16:15:43.443431: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 1: Y N Y Y 2022-08-17 16:15:43.444206: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 2: Y Y N Y 2022-08-17 16:15:43.444586: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1200] 3: Y Y Y N 2022-08-17 16:15:43.452440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 2446 MB memory) -> physical GPU (device: 0, name: Tesla T4, pci bus id: 0000:3b:00.0, compute capability: 7.5)2022-08-17 16:15:43.462938: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 5244 MB memory) -> physical GPU (device: 1, name: Tesla T4, pci bus id: 0000:5e:00.0, compute capability: 7.5)2022-08-17 16:15:43.469831: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:2 with 14259 MB memory) -> physical GPU (device: 2, name: Tesla T4, pci bus id: 0000:b1:00.0, compute capability: 7.5)2022-08-17 16:15:43.483509: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1326] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:3 with 14259 MB memory) -> physical GPU (device: 3, name: Tesla T4, pci bus id: 0000:d9:00.0, compute capability: 7.5)Device mapping:/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:1 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:2 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:3 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla T4, pci bus id: 0000:3b:00.0, compute capability: 7.5/job:localhost/replica:0/task:0/device:GPU:1 -> 
device: 1, name: Tesla T4, pci bus id: 0000:5e:00.0, compute capability: 7.5/job:localhost/replica:0/task:0/device:GPU:2 -> device: 2, name: Tesla T4, pci bus id: 0000:b1:00.0, compute capability: 7.5/job:localhost/replica:0/task:0/device:GPU:3 -> device: 3, name: Tesla T4, pci bus id: 0000:d9:00.0, compute capability: 7.52022-08-17 16:15:43.490300: I tensorflow/core/common_runtime/direct_session.cc:296] Device mapping:/job:localhost/replica:0/task:0/device:XLA_CPU:0 -> device: XLA_CPU device/job:localhost/replica:0/task:0/device:XLA_GPU:0 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:1 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:2 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:XLA_GPU:3 -> device: XLA_GPU device/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla T4, pci bus id: 0000:3b:00.0, compute capability: 7.5/job:localhost/replica:0/task:0/device:GPU:1 -> device: 1, name: Tesla T4, pci bus id: 0000:5e:00.0, compute capability: 7.5/job:localhost/replica:0/task:0/device:GPU:2 -> device: 2, name: Tesla T4, pci bus id: 0000:b1:00.0, compute capability: 7.5/job:localhost/replica:0/task:0/device:GPU:3 -> device: 3, name: Tesla T4, pci bus id: 0000:d9:00.0, compute capability: 7.5WARNING:tensorflow:From /home/wanghuijuan/whj_code1/trytf1.py:13: The name tf.global_variables_initializer is deprecated. Please use tf.compat.v1.global_variables_initializer instead.add: (Add): /job:localhost/replica:0/task:0/device:GPU:22022-08-17 16:15:43.495600: I tensorflow/core/common_runtime/placer.cc:54] add: (Add)/job:localhost/replica:0/task:0/device:GPU:2init: (NoOp): /job:localhost/replica:0/task:0/device:GPU:02022-08-17 16:15:43.495642: I tensorflow/core/common_runtime/placer.cc:54] init: (NoOp)/job:localhost/replica:0/task:0/device:GPU:0a: (Const): /job:localhost/replica:0/task:0/device:CPU:02022-08-17 16:15:43.495664: I tensorflow/core/common_runtime/placer.cc:54] a: (Const)/job:localhost/replica:0/task:0/device:CPU:0b: (Const): /job:localhost/replica:0/task:0/device:CPU:02022-08-17 16:15:43.495682: I tensorflow/core/common_runtime/placer.cc:54] b: (Const)/job:localhost/replica:0/task:0/device:CPU:0[2. 4. 6.]

When running TensorFlow code, lines like the following need to be added at the top of the script:

import tensorflow as tf
import os

os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "2"  # index of the GPU this process is allowed to use
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# pass this config when creating the session, e.g. sess = tf.Session(config=config)

Keras itself ended up being installed automatically.

Here is a concrete example, using bert4keras directly:

pip install bert4keras
wget -P /data/pretrained_model/chinese_L-12_H-768_A-12 https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip
unzip /data/pretrained_model/chinese_L-12_H-768_A-12/chinese_L-12_H-768_A-12.zip -d /data/pretrained_model/chinese_L-12_H-768_A-12
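Since the script below hard-codes three file paths inside the unzipped checkpoint directory, a small pre-check (my own addition; the directory layout is the one produced by the unzip command above) can save a confusing error later:

# Verify that the unzipped BERT checkpoint contains the files bert4keras needs.
import os

base = '/data/pretrained_model/chinese_L-12_H-768_A-12/chinese_L-12_H-768_A-12'
for name in ('bert_config.json', 'bert_model.ckpt.index', 'vocab.txt'):
    path = os.path.join(base, name)
    print(path, 'exists' if os.path.exists(path) else 'MISSING')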

The code (adapted from https://github.com/bojone/bert4keras/blob/master/examples/basic_extract_features.py):

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
import time
import numpy as np
from bert4keras.backend import keras
from bert4keras.models import build_transformer_model
from bert4keras.tokenizers import Tokenizer
from bert4keras.snippets import to_array

config_path = '/data/pretrained_model/chinese_L-12_H-768_A-12/chinese_L-12_H-768_A-12/bert_config.json'
checkpoint_path = '/data/pretrained_model/chinese_L-12_H-768_A-12/chinese_L-12_H-768_A-12/bert_model.ckpt'
dict_path = '/data/pretrained_model/chinese_L-12_H-768_A-12/chinese_L-12_H-768_A-12/vocab.txt'

tokenizer = Tokenizer(dict_path, do_lower_case=True)  # build the tokenizer
model = build_transformer_model(config_path, checkpoint_path)  # build the model and load the weights

# encoding test
token_ids, segment_ids = tokenizer.encode(u'语言模型')
token_ids, segment_ids = to_array([token_ids], [segment_ids])

print('\n ===== predicting =====\n')
print(model.predict([token_ids, segment_ids]))
"""
Output:
[[[-0.63251007  0.2030236   0.07936534 ...  0.49122632 -0.20493352   0.2575253 ]
  [-0.7588351   0.09651865  1.0718756  ... -0.6109694   0.04312154   0.03881441]
  [ 0.5477043  -0.792117    0.44435206 ...  0.42449304  0.41105673   0.08222899]
  [-0.2924238   0.6052722   0.49968526 ...  0.8604137  -0.6533166    0.5369075 ]
  [-0.7473459   0.49431565  0.7185162  ...  0.3848612  -0.74090636   0.39056838]
  [-0.8741375  -0.21650358  1.338839   ...  0.5816864  -0.4373226    0.56181806]]]
"""
time.sleep(100)

The time.sleep() call is only there to keep the process alive for a while, so that nvidia-smi can explicitly confirm that this program occupies memory on GPU 2 only, and show how much GPU memory it actually uses (about 14 GB, which is quite a lot).
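If that memory footprint is a problem, the config.gpu_options settings shown earlier can also be applied to the Keras session used by bert4keras. A minimal sketch of my own, assuming TF 1.14 / Keras 2.3.1 APIs (bert4keras itself does not require this), placed before building the model:

# Limit TF 1.x GPU memory usage for Keras-based code such as bert4keras.
import tensorflow as tf
from bert4keras.backend import keras

config = tf.ConfigProto()
config.gpu_options.allow_growth = True                      # allocate GPU memory on demand
# config.gpu_options.per_process_gpu_memory_fraction = 0.5  # or cap usage at a fixed fraction
keras.backend.set_session(tf.Session(config=config))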

3. Other references used while writing this article

tensorflow 1.14指定gpu运行设置_愚昧之山绝望之谷开悟之坡的博客-CSDN博客_tensorflow指定gpu