  • 2022-01-13 15:30:56

    Models

    • A classification model outputs a probability for each class
    • A regression model predicts an estimated value
    • Prepare the dataset
    • Preprocess the dataset
    • Train the model
    • Export the model
    • Load and deploy the model
    • Run predictions
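The two output styles above can be illustrated without any framework (hypothetical values, plain Python in place of real model output): a classifier emits one probability per class and we take the argmax, while a regressor emits a single continuous value that we round to the nearest label.

```python
# Hypothetical model outputs, for illustration only.
class_probs = [0.1, 0.7, 0.2]   # classifier: one probability per class
regressed_value = 2.8           # regressor: a single continuous estimate

# Classification: pick the index with the highest probability.
predicted_class = max(range(len(class_probs)), key=lambda i: class_probs[i])

# Regression: round the continuous prediction to the nearest integer label.
predicted_label = round(regressed_value)

print(predicted_class, predicted_label)  # 1 3
```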

    Wrapping the datasets and the model training

    
    from tensorflow.keras.datasets import mnist
    import abc
    import os
    import numpy as np
    import tensorflow as tf
    from sklearn.datasets import load_files
    from tensorflow.keras.models import load_model
    import autokeras as ak
    import requests
    import cv2
    
    
    class ABCDatasets(metaclass=abc.ABCMeta):
        """Dataset abstract base class; all of the following methods must be overridden."""

        @abc.abstractmethod
        def load_data(self):
            """Load the data."""
            pass

        @property
        @abc.abstractmethod
        def train_data(self):
            """Training data."""
            pass

        @property
        @abc.abstractmethod
        def test_data(self):
            """Test data."""
            pass

        @property
        @abc.abstractmethod
        def label_mapping(self):
            """Label mapping."""
            return {}
    
    
    class ABCModel(metaclass=abc.ABCMeta):
        """Model abstract base class; all of the following methods must be overridden."""

        @abc.abstractmethod
        def train(self):
            """Training method."""
            pass

        @abc.abstractmethod
        def export_model(self, filename):
            """Export the model."""
            pass

        @abc.abstractmethod
        def load_model(self, filename):
            """Load the model."""
            pass

        @abc.abstractmethod
        def predict(self, images: np.ndarray):
            """Model prediction."""
            pass
    
    
    class Model(ABCModel):
        modeler: ak.AutoModel
    
        def __init__(self, datasets: ABCDatasets = None):
            self.datasets = datasets
            self.x_train, self.y_train = self.datasets.train_data
            self.x_test, self.y_test = self.datasets.test_data
            self.label_mapping = self.datasets.label_mapping
    
        def train(self):
            self.modeler.fit(self.x_train, self.y_train, epochs=1)
    
        def export_model(self, filename):
            self.modeler.export_model().save(filename)
    
        def load_model(self, filename):
            self.modeler = load_model(filename, custom_objects=ak.CUSTOM_OBJECTS)
    
        def predict(self, images: np.ndarray):
            return self.modeler.predict(np.array(images))

        def post(self, predict_result):
            """Post-processing hook; overridden by subclasses."""
            return
    
    
    class MnistDataSets(ABCDatasets):
        """
        配置数据集,以及标签
        """
    
        def __init__(self):
            self.x_train = self.y_train = self.x_test = self.y_test = None
    
        def load_data(self):
            """加载官方的手写数据集"""
            (self.x_train, self.y_train), (self.x_test, self.y_test) = mnist.load_data()
            # print(self.x_train.shape)
            # print(self.y_train.shape)
            # print(self.x_train[0].shape)
            # (60000, 28, 28)
            # (60000,)
            # (28, 28)
            # 这里输入可知,数据集包含了60000张图片,且素材是一个单通道28x28
    
        @property
        def label_mapping(self):
            """标签映射关系"""
            return {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 0: 0}
    
        @property
        def train_data(self):
            """训练数据集"""
            return self.x_train, self.y_train
    
        @property
        def test_data(self):
            """测试数据集"""
            return self.x_test, self.y_test
    
        def get_online_test_data(self):
            """
            Fetch a single handwritten-digit image online and preprocess it.
            :return:
            """
            label = 3
            url = "https://img1.baidu.com/it/u=3472197447,93830654&fm=253&fmt=auto&app=138&f=JPEG?w=500&h=281"
            image = requests.get(url).content
            nparr = np.frombuffer(image, np.uint8)  # np.fromstring is deprecated
            gray = cv2.imdecode(nparr, cv2.IMREAD_GRAYSCALE)
            gray = cv2.resize(gray, (28, 28))
            _, gray = cv2.threshold(gray, thresh=165, maxval=255, type=cv2.THRESH_BINARY)
            return gray, label
    
    
    class IMDBDataSets(ABCDatasets):
        """
        配置数据集,以及标签
        """
    
        def __init__(self):
            self.x_train = self.y_train = self.x_test = self.y_test = None
    
        def load_data(self):
            """Load the data."""
            dataset = tf.keras.utils.get_file(
                fname="aclImdb.tar.gz",
                origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
                extract=True,
            )
            IMDB_DATADIR = os.path.join(os.path.dirname(dataset), "aclImdb")

            # load_files assigns targets by sorted folder name, so list the classes sorted:
            self.classes = ["neg", "pos"]
            train_data = load_files(
                os.path.join(IMDB_DATADIR, "train"), shuffle=True, categories=self.classes
            )
            test_data = load_files(
                os.path.join(IMDB_DATADIR, "test"), shuffle=False, categories=self.classes
            )
            self.x_train = np.array(train_data.data)
            self.y_train = np.array(train_data.target)
            self.x_test = np.array(test_data.data)
            self.y_test = np.array(test_data.target)
            print(self.x_train[0])
            print(self.y_train[0])
            print(self.x_train.shape)
            print(self.y_train.shape)
            print(self.x_train[0].shape)
            # The first print outputs a raw text sample
            # 1
            # (25000,)
            # (25000,)
            # ()
    
        @property
        def label_mapping(self):
            """Label mapping."""
            return {0: self.classes[0], 1: self.classes[1]}

        @property
        def train_data(self):
            """Training data."""
            return self.x_train, self.y_train

        @property
        def test_data(self):
            """Test data."""
            return self.x_test, self.y_test
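The abstract interface above can be exercised with a tiny in-memory dataset. The sketch below (a hypothetical `ToyDataSets` holding plain Python lists instead of MNIST arrays, with the base class repeated so it runs standalone) shows the contract that each concrete dataset must fulfil:

```python
import abc


class ABCDatasets(metaclass=abc.ABCMeta):
    """Same contract as the dataset base class above."""

    @abc.abstractmethod
    def load_data(self):
        pass

    @property
    @abc.abstractmethod
    def train_data(self):
        pass

    @property
    @abc.abstractmethod
    def test_data(self):
        pass

    @property
    @abc.abstractmethod
    def label_mapping(self):
        pass


class ToyDataSets(ABCDatasets):
    """Hypothetical two-class dataset held in plain lists."""

    def __init__(self):
        self.x_train = self.y_train = self.x_test = self.y_test = None

    def load_data(self):
        self.x_train = [[0, 0], [1, 1]]
        self.y_train = [0, 1]
        self.x_test = [[0, 1]]
        self.y_test = [0]

    @property
    def train_data(self):
        return self.x_train, self.y_train

    @property
    def test_data(self):
        return self.x_test, self.y_test

    @property
    def label_mapping(self):
        return {0: "neg", 1: "pos"}


data = ToyDataSets()
data.load_data()
x, y = data.train_data
print(len(x), data.label_mapping[y[1]])  # 2 pos
```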
    
    

    Model training and testing

    from tools import Model, MnistDataSets  # imported from the wrapper module defined above
    import autokeras as ak
    
    
    class ImageClassifier(Model):
        """Image classification."""
        modeler = ak.ImageClassifier(overwrite=True, max_trials=1)

        def post(self, predict_result):
            """Post-processing: map each probability vector to its most likely label."""
            label_predict = []
            prob_predict = predict_result
            for img_predict in prob_predict:
                idx = img_predict.argmax()
                label_predict.append(self.label_mapping.get(idx))
            return label_predict
    
    
    class ImageRegressor(Model):
        """
        Image regression.
        """
        modeler = ak.ImageRegressor(overwrite=True, max_trials=1)

        def post(self, predict_result):
            """Post-processing: truncate the regressed value and map it to a label."""
            label_predict = []
            for img_predict in predict_result:
                label_predict.append(self.label_mapping.get(int(img_predict)))
            return label_predict
    
    
    def train_image_classifier():
        """训练数据"""
        data = MnistDataSets()
        data.load_data()
    
        model = ImageClassifier(datasets=data)
        model.train()
        model.export_model("mnist_image_classifier.h5")
    
    
    def test_image_classifier():
        """使用在线数据进行测试"""
        data = MnistDataSets()
        image, label = data.get_online_test_data()
        model = ImageClassifier(datasets=data)
        model.load_model("mnist_image_classifier.h5")
        predict_result = model.predict(images=[image])
        post_result = model.post(predict_result)[0]
        print("predict_result", predict_result)
        print(label, post_result, label == post_result)
    
    
    def train_image_regressor():
        """训练数据"""
        data = MnistDataSets()
        data.load_data()
    
        model = ImageRegressor(datasets=data)
        model.train()
        model.export_model("mnist_image_regressor.h5")
    
    
    def test_image_regressor():
        """使用在线数据进行测试"""
        data = MnistDataSets()
        image, label = data.get_online_test_data()
        model = ImageRegressor(datasets=data)
        model.load_model("mnist_image_regressor.h5")
        predict_result = model.predict(images=[image])
        post_result = model.post(predict_result)[0]
        print("predict_result", predict_result)
        print(label, post_result, label == post_result)
    
    
    if __name__ == '__main__':
        import fire
    
        fire.Fire()
    
    

    Results

    • Training
    [~]# python3 model.py train_image_classifier
    Search: Running Trial #1
    
    Hyperparameter    |Value             |Best Value So Far 
    image_block_1/b...|vanilla           |?                 
    image_block_1/n...|True              |?                 
    image_block_1/a...|False             |?                 
    image_block_1/c...|3                 |?                 
    image_block_1/c...|1                 |?                 
    image_block_1/c...|2                 |?                 
    image_block_1/c...|True              |?                 
    image_block_1/c...|False             |?                 
    image_block_1/c...|0.25              |?                 
    image_block_1/c...|32                |?                 
    image_block_1/c...|64                |?                 
    classification_...|flatten           |?                 
    classification_...|0.5               |?                 
    optimizer         |adam              |?                 
    learning_rate     |0.001             |?                 
    
    1500/1500 [==============================] - 76s 50ms/step - loss: 0.1742 - accuracy: 0.9471 - val_loss: 0.0739 - val_accuracy: 0.9791
    ...
    
    
    • Testing
    [~]# python3 model.py test_image_classifier
    predict_result [[3.1589475e-04 3.8880799e-02 5.0686980e-03 9.2180651e-01 9.0317568e-03
      2.1918179e-02 9.9024124e-05 3.8853439e-05 2.5504678e-03 2.8968096e-04]]
    3 3 True
    
    
  • AutoKeras

    2018-11-23 14:29:28

    Source: http://nooverfit.com/wp/autokeras%EF%BC%9A%E5%BC%80%E6%BA%90automl%E5%88%9D%E4%BD%93%E9%AA%8C%EF%BC%8C%E8%87%AA%E5%8A%A8%E6%90%9C%E7%B4%A2%E5%92%8C%E6%9E%84%E5%BB%BA%E6%9C%80%E4%BC%98%E6%B7%B1%E5%BA%A6%E6%A8%A1%E5%9E%8B/

    Automatically searching for deep-learning architectures and tuning hyperparameters has long been a tool data scientists wished for. Google AI's AutoML is a paid service; if you want something open source, and want to dig into the technology behind AutoML, take a look at AutoKeras.

    AutoKeras is still at an early stage of development. It is built on Keras (with a PyTorch backend as well), and Keras runs on TensorFlow, so GPU utilization is not a concern (as long as you have the GPU build of TensorFlow installed). And since Keras code is extremely concise, AutoKeras is easy to pick up.

    So, straight to the AutoKeras version of the MNIST training code:

    from keras.datasets import mnist
    from autokeras.image_classifier import ImageClassifier

    if __name__ == '__main__':
        (x_train, y_train), (x_test, y_test) = mnist.load_data()
        x_train = x_train.reshape(x_train.shape + (1,))
        x_test = x_test.reshape(x_test.shape + (1,))

        clf = ImageClassifier(verbose=True, augment=False)
        clf.fit(x_train, y_train, time_limit=12 * 60 * 60)
        clf.final_fit(x_train, y_train, x_test, y_test, retrain=True)
        y = clf.evaluate(x_test, y_test)
        print(y * 100)

    A few caveats. First, the code must run on Python 3.6, or you will hit compatibility problems (at the time, autokeras only supported Python 3.6). Second, unless you configure it otherwise, autokeras will saturate your GPU or CPU:

    So pick a reasonably powerful machine before training.

    Why is it so resource-hungry? Under the hood, autokeras uses techniques similar to grid search and Bayesian search from traditional machine learning.

    What you may not know is that this is also related to the technique used in the search for the missing Malaysia Airlines flight:

    The essence of this kind of search is that every step produces feedback. While searching for a better deep-learning model, the algorithm makes a hypothesis (e.g. adding more skip connections might help), trains a model with those skip connections to validate it, and then takes the next search step (which is exactly why it consumes so much compute).
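The feedback loop described above can be sketched in a few lines. This is a toy hill-climbing search over one hypothetical hyperparameter (a layer "width" whose assumed best value is 16), not autokeras' actual Bayesian algorithm:

```python
import random

random.seed(0)


def evaluate(width):
    """Toy stand-in for 'train a model and report validation accuracy'.
    Peak accuracy is at width == 16 (a made-up assumption)."""
    return 1.0 - abs(width - 16) / 16.0


# Start from a guess, propose variations, and keep whichever scores best.
best_width, best_score = 4, evaluate(4)
for _ in range(20):
    candidate = max(1, best_width + random.choice([-2, -1, 1, 2]))
    score = evaluate(candidate)     # feedback from the trial
    if score > best_score:          # accept the hypothesis only if it helped
        best_width, best_score = candidate, score

print(best_width, round(best_score, 2))
```

Each trial is expensive because "evaluate" really means training a full model, which is why this style of search burns so much compute.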

    Of course autokeras still has plenty of room to improve, for example refinements to the Bayesian search above, and support for searching architectures other than CNNs, such as RNNs. Hopefully it keeps iterating quickly; readers can find the GitHub source in the references.


     

    References:

    1. https://github.com/jhfjhfj1/autokeras
    2. https://autokeras.com/
    3. https://towardsdatascience.com/auto-keras-or-how-you-can-create-a-deep-learning-model-in-4-lines-of-code-b2ba448ccf5e
    4. https://www.quora.com/What-is-the-probability-to-find-Malaysia-Airlines-MH370-using-Bayesian-search-theory
    5. http://soubhikbarari.github.io/blog/2016/09/14/overview-of-bayesian-optimization
  • AutoKeras notes

    2019-05-28 16:53:00

    Drawbacks:

    1. For a model of any real size, a single training run takes days or even months, so trying dozens or hundreds of hyperparameter combinations is out of the question.
    2. Even if you did try dozens or hundreds of combinations, the best one may not perform much better than the parameters you picked off the top of your head.
    3. Many theoretical results show that no universal algorithm can solve every problem.

     

     

    Goals:

    1) Define the architecture.

    2) Experiment with and tune a set of hyperparameters.

     

    In essence: lower the barrier to entry for machine learning and deep learning by using automatic neural architecture search (NAS) algorithms.

     

     

    Requirements:

    python3.6

    keras

    tensorflow

    pytorch

    Note: autokeras depends on fork; os.fork() cannot run on Windows.
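A quick way to check whether fork-based code can run on the current platform. This uses the standard multiprocessing module, not anything autokeras-specific, so it is a hedged illustration of the constraint rather than an autokeras API:

```python
import multiprocessing as mp

# 'fork' is listed on Linux/macOS but not on Windows, which is why
# fork-dependent libraries such as early autokeras could not run there.
fork_available = "fork" in mp.get_all_start_methods()
print(fork_available)
```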

    Download the code from the GitHub repository and run the following commands in the project directory.

    pip install -r requirements.txt
    python setup.py install
    

     

    Auto-Keras is nice and genuinely convenient, but even the MNIST dataset took a full day on a GPU server before producing a result, which is still rather slow. For such a simple dataset the final network is also relatively complex, and the result still does not beat networks hand-built by experts.

     

     

    The first time I tried it I somehow missed that it has a Docker image:

    Auto-Keras Docker

    Download the Auto-Keras Docker image

    The following command downloads the Auto-Keras Docker image to your machine.

    docker pull garawalid/autokeras:latest
    

    Image versions are tagged in the following format:

    Tag     Description
    latest  the latest Auto-Keras image
    devel   Auto-Keras image tracking the GitHub repository

    Start the Auto-Keras Docker container

    docker run -it --shm-size 2G garawalid/autokeras /bin/bash
    

    If the container needs more memory, increase the shm-size value. (See the Docker run reference.)

    Running an application:

    To run a local script file.py with Auto-Keras inside the container, mount the host directory with -v hostDir:/app

    docker run -it -v hostDir:/app --shm-size 2G garawalid/autokeras python file.py
    

    Example:

    Let's download the mnist example and run it inside the container.

    Download the example:

    curl https://raw.githubusercontent.com/keras-team/autokeras/master/examples/a_simple_example/mnist.py --output mnist.py
    

    Run the mnist example:

    docker run -it -v "$(pwd)":/app --shm-size 2G garawalid/autokeras python mnist.py

    A simple MNIST training tutorial:

    from keras.datasets import mnist
    from autokeras import ImageClassifier
    
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train = x_train.reshape(x_train.shape + (1,))
    x_test = x_test.reshape(x_test.shape + (1,))
    
    clf = ImageClassifier(verbose=True)
    clf.fit(x_train, y_train, time_limit=1*60*60)
    clf.final_fit(x_train, y_train, x_test, y_test, retrain=True)
    y = clf.evaluate(x_test, y_test)
    print(y)
    clf.load_searcher().load_best_model().produce_keras_model().save(r'autokeras\test\test.h5')

    Details printed by the server:

     

    API overview:

    (1) The fit method
    fit searches for the best network architecture for the given dataset and then trains it. The data must be Numpy arrays; the training data is passed via x_train and y_train.

    Arguments:
    x:
    a Numpy array instance containing the training data (or the training data combined with the validation data)
    y:
    a Numpy array instance containing the training labels (or the training labels combined with the validation labels)
    x_test:
    a Numpy array containing the test data
    y_test:
    a Numpy array containing the test labels
    time_limit:
    the time limit for the network search

    (2) The final_fit method
    final_fit performs the final round of training after the best network has been found.

    Arguments:
    x:
    a Numpy array instance containing the training data (or the training data combined with the validation data)
    y:
    a Numpy array instance containing the training labels (or the training labels combined with the validation labels)
    x_test:
    a Numpy array containing the test data
    y_test:
    a Numpy array containing the test labels
    trainer_args:
    a dict containing the arguments for ModelTrainer
    retrain:
    a bool deciding whether to re-initialize the model weights
    (3) The predict method
    predict returns predictions for the test data.

    Arguments:
    x_test:
    a Numpy array containing the test data
    (4) The evaluate method
    evaluate measures the model's accuracy between predictions and ground truth.

    Arguments:
    x_test:
    a Numpy array containing the test data
    y_test:
    a Numpy array containing the test labels
    
    
    
    (1) The init method
    init initializes the instance. When 'resume' is True, the classifier is loaded from 'path'; otherwise a new model is created.

    Arguments:
    verbose:
    a bool deciding whether the search process is printed to the output device
    path:
    a string, the location where intermediate results are stored
    resume:
    a bool; when 'True', the model resumes searching from the model stored under the path directory; when 'False', a new search starts
    searcher_args:
    a dict containing the arguments for the searcher's initializer
    (2) The final_fit method
    final_fit performs the final round of training after the best network has been found.

    Arguments:
    x:
    a Numpy array instance containing the training data (or the training data combined with the validation data)
    y:
    a Numpy array instance containing the training labels (or the training labels combined with the validation labels)
    x_test:
    a Numpy array containing the test data
    y_test:
    a Numpy array containing the test labels
    trainer_args:
    a dict containing the arguments for ModelTrainer
    retrain:
    a bool deciding whether to re-initialize the model weights
    (3) The export_keras_model method
    Exports the trained Auto-Keras model.

    (4) The predict method
    predict returns predictions for the test data.

    Arguments:
    x_test:
    a Numpy array containing the test data
    (5) The evaluate method
    evaluate measures the model's accuracy between predictions and ground truth.

    Arguments:
    x_test:
    a Numpy array containing the test data
    y_test:
    a Numpy array containing the test labels
    
    
    
    
    (1) The init method
    image_supervised provides the following classifiers and regressors:

    Name               Purpose
    ImageClassifier    classifies images; automatically searches the best convolutional architecture for the dataset
    ImageClassifier1D  classifies 1D images; automatically searches the best convolutional architecture for the dataset
    ImageClassifier3D  classifies 3D images; automatically searches the best convolutional architecture for the dataset
    ImageRegressor     regression on images; automatically searches the best convolutional architecture for the dataset
    ImageRegressor1D   regression on 1D images; automatically searches the best convolutional architecture for the dataset
    ImageRegressor3D   regression on 3D images; automatically searches the best convolutional architecture for the dataset

    Their initializers take the following arguments:

    path:
    where the classifier model and intermediate results are stored
    cnn:
    the convolutional network model defined in net_module.py
    y_encoder:
    the label encoder, which converts labels to a one-hot matrix
    data_transformer:
    a data-transformation class; see ImageDataTransformer for a usage example
    verbose:
    a bool deciding whether the search process is printed to the output device
    augment:
    a bool deciding whether to augment the data; if unset, the value of Constant.DATA_AUGMENTATION is used
    searcher_args:
    a dict containing the arguments for the searcher's initializer
    resize_height:
    target height for resizing the image data
    resize_width:
    target width for resizing the image data
    (2) The read_images method
    read_images reads image data from a directory and returns it as Numpy array instances.

    Arguments:
    img_file_names:
    a list of image file names
    images_dir_path:
    the directory where the images are stored
    (3) The load_image_dataset method
    load_image_dataset reads image file names and labels from a CSV file. The CSV should contain two columns, "File Name" and "Label".

    Arguments:
    csv_file_path:
    the location of the CSV file
    images_path:
    the directory where the images are stored
    (4) The export_keras_model method
    Exports the trained Auto-Keras model.

    (5) The evaluate method
    

     

     

    Summary:

    1. The training data must be Numpy arrays.

    def load_images():
        x_train, y_train = load_image_dataset(csv_file_path="train/label.csv",
                                              images_path="train")
        print(x_train.shape)
        print(y_train.shape)
    
        x_test, y_test = load_image_dataset(csv_file_path="test/label.csv",
                                            images_path="test")
        print(x_test.shape)
        print(y_test.shape)
        return x_train, y_train, x_test, y_test

    The bundled load_image_dataset function reads the CSV and converts the data into Numpy arrays.
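A hedged sketch of what that label CSV looks like and how file names map to labels. It uses the standard csv module rather than load_image_dataset itself, and the file names are hypothetical:

```python
import csv
import io

# Hypothetical contents of train/label.csv: two columns, file name and label.
csv_text = """File Name,Label
img_0001.png,3
img_0002.png,7
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
file_names = [r["File Name"] for r in rows]
labels = [int(r["Label"]) for r in rows]
print(file_names, labels)  # ['img_0001.png', 'img_0002.png'] [3, 7]
```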

    2. Training:

    clf = ImageClassifier(verbose=True, augment=False)
    clf.fit(x_train, y_train, time_limit=12 * 60 * 60)

    Second-stage training:

    clf = ImageClassifier(verbose=True, augment=False)
    clf.final_fit(x_train, y_train, x_test, y_test, retrain=True)

    Evaluation:

    y = clf.evaluate(x_test, y_test)
    print(y * 100)

    Exporting the model:

    clf.export_autokeras_model(r'autokeras\test\test.h5')

     

    3. Visualization

    Run examples/visualize.py, passing the same path as the argument.

    from examples.visualizations import visualize
    if __name__ == '__main__':
        visualize(r'autokeras\test\test.h5')

    The visualize function is imported from the examples package.

     

    4. Building a network with the MLP module

    import numpy as np
    from keras.datasets import mnist
    
    from autokeras import MlpModule
    from autokeras.backend.torch.loss_function import classification_loss
    from autokeras.nn.metric import Accuracy
    from autokeras.preprocessor import OneHotEncoder
    from autokeras.backend.torch import DataTransformerMlp
    
    
    def transform_y(y_train):
        # Transform y_train.
        y_encoder = OneHotEncoder()
        y_encoder.fit(y_train)
        y_train = y_encoder.transform(y_train)
        return y_train, y_encoder
    
    
    if __name__ == '__main__':
        (x_train, y_train), (x_test, y_test) = mnist.load_data()
        x_train = np.squeeze(x_train.reshape((x_train.shape[0], -1)))
        x_test = np.squeeze(x_test.reshape((x_test.shape[0], -1)))
        y_train, y_encoder = transform_y(y_train)
        y_test, _ = transform_y(y_test)
        mlpModule = MlpModule(loss=classification_loss, metric=Accuracy, searcher_args={}, verbose=True)
        # instantiate the module; set the loss and metric types; verbose controls logging
    
        # specify the fit args
        data_transformer = DataTransformerMlp(x_train)
        train_data = data_transformer.transform_train(x_train, y_train)
        test_data = data_transformer.transform_test(x_test, y_test)
    
        # fit arguments: n_output_node: number of output classes (nodes); input_shape: shape of the input images; train_data/test_data: data sources
        fit_args = {
            "n_output_node": y_encoder.n_classes,
            "input_shape": x_train.shape,
            "train_data": train_data,
            "test_data": test_data
        }
        mlpModule.fit(n_output_node=fit_args.get("n_output_node"),
                      input_shape=fit_args.get("input_shape"),
                      train_data=fit_args.get("train_data"),
                      test_data=fit_args.get("test_data"),
                      time_limit=24 * 60 * 60)
        # final training
        mlpModule.final_fit(train_data=fit_args.get("train_data"),
                      test_data=fit_args.get("test_data"),
                            retrain=True)
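The transform_y helper above relies on autokeras' OneHotEncoder. Its assumed behavior (mapping integer labels to one-hot rows) can be mimicked in plain Python; this is an illustrative stand-in, not the library class:

```python
def one_hot(labels):
    """Map integer labels to one-hot rows (illustration of the assumed
    OneHotEncoder behavior, not autokeras' implementation)."""
    classes = sorted(set(labels))
    index = {c: i for i, c in enumerate(classes)}
    return [[1 if index[l] == j else 0 for j in range(len(classes))]
            for l in labels]


encoded = one_hot([0, 2, 1])
print(encoded)  # [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```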

     

    The available loss functions are:

    import torch
    
    
    def classification_loss(prediction, target):
        labels = target.argmax(1)
        return torch.nn.CrossEntropyLoss()(prediction, labels)
    
    
    def regression_loss(prediction, target):
        return torch.nn.MSELoss()(prediction, target.float())
    
    
    def binary_classification_loss(prediction, label):
        return torch.nn.BCELoss()(prediction, label)
    

     

    The available metric types are:

    from abc import abstractmethod
    from autokeras.backend import Backend
    from sklearn.metrics import accuracy_score, mean_squared_error
    
    
    class Metric:
    
        @classmethod
        @abstractmethod
        def higher_better(cls):
            pass
    
        @classmethod
        @abstractmethod
        def compute(cls, prediction, target):
            pass
    
        @classmethod
        @abstractmethod
        def evaluate(cls, prediction, target):
            pass
    
    
    class Accuracy(Metric):
        @classmethod
        def higher_better(cls):
            return True
    
        @classmethod
        def compute(cls, prediction, target):
            return Backend.classification_metric(prediction, target)
    
        @classmethod
        def evaluate(cls, prediction, target):
            return accuracy_score(target, prediction)
    
    
    class MSE(Metric):
        @classmethod
        def higher_better(cls):
            return False
    
        @classmethod
        def compute(cls, prediction, target):
            return Backend.regression_metric(prediction, target)
    
        @classmethod
        def evaluate(cls, prediction, target):
            return mean_squared_error(target, prediction)
    

     

    5. The CNN and ResNet modules

    The CNN module works much like the MLP module:

    from keras.datasets import mnist
    from autokeras import CnnModule
    from autokeras.backend.torch.loss_function import classification_loss
    from autokeras.nn.metric import Accuracy
    from autokeras.preprocessor import OneHotEncoder
    from autokeras.backend.torch import ImageDataTransformer
    
    
    def transform_y(y_train):
        # Transform y_train.
        y_encoder = OneHotEncoder()
        y_encoder.fit(y_train)
        y_train = y_encoder.transform(y_train)
        return y_train, y_encoder
    
    
    if __name__ == '__main__':
        (x_train, y_train), (x_test, y_test) = mnist.load_data()
        x_train = x_train.reshape(x_train.shape + (1,))
        x_test = x_test.reshape(x_test.shape + (1,))
        y_train, y_encoder = transform_y(y_train)
        y_test, _ = transform_y(y_test)
        cnnModule = CnnModule(loss=classification_loss, metric=Accuracy, searcher_args={}, verbose=True)
        # specify the fit args
        data_transformer = ImageDataTransformer(x_train, augment=True)
        train_data = data_transformer.transform_train(x_train, y_train)
        test_data = data_transformer.transform_test(x_test, y_test)
        fit_args = {
            "n_output_node": y_encoder.n_classes,
            "input_shape": x_train.shape,
            "train_data": train_data,
            "test_data": test_data
        }
        cnnModule.fit(n_output_node=fit_args.get("n_output_node"),
                      input_shape=fit_args.get("input_shape"),
                      train_data=fit_args.get("train_data"),
                      test_data=fit_args.get("test_data"),
                      time_limit=24 * 60 * 60)

     

    6. Wrap-up: other tasks it can do:

    1) Automatic text classification: TextClassifier

    2) The official docs list many more, but the APIs don't match the code at all; I'll look again later.

     

    Docs: https://autokeras.com/temp/search/

     

     

    Verdict: not mature enough yet; of limited engineering value.

     

  • autokeras installation notes (Python)

    2020-06-27 11:43:29

    Installing autokeras failed with: ModuleNotFoundError: No module named 'kerastuner' – Autokeras==1.0.3
    This is a pitfall: your CUDA compute capability must be above 3.5. The correct installation steps are:
    Step 1: install keras-tuner with the following command
    Update 2020-06-28: verified — adding the Douban mirror makes it much faster!

    pip install git+https://github.com/keras-team/keras-tuner.git@1.0.2rc0 -i https://pypi.douban.com/simple
    

    Step 2: install autokeras

    pip install autokeras==1.0.3 -i https://pypi.douban.com/simple
    

    With the Douban mirror, the install is fast.

    Step 3: if pip complains about a scipy version mismatch, install the matching version

    pip install scipy==1.4.1 -i https://pypi.douban.com/simple
    

    Update 2020-07-04: this issue has been fixed; version 1.4.1 is now installed by default.

    Step 4: install tensorflow-gpu==2.2.0

    pip install tensorflow-gpu==2.2.0 -i https://pypi.tuna.tsinghua.edu.cn/simple
    

    conda's newest available version at the time was 2.1.0, so install with pip and the Tsinghua mirror.
    The Douban mirror also works:

    pip install tensorflow-gpu==2.2.0 -i https://pypi.douban.com/simple
    

    Step 5: install CUDA

    conda install cudatoolkit=10.1 cudnn=7.6.5
    

    That's it. Run this example to test:

    from tensorflow.keras.datasets import mnist
    
    import autokeras as ak
    import os
    
    # Pin training to one GPU
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # select GPU index 1 (indices start at 0)
    
    # Prepare the dataset.
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    print(x_train.shape)  # (60000, 28, 28)
    print(y_train.shape)  # (60000,)
    print(y_train[:3])  # e.g. array([5, 0, 4], dtype=uint8)
    
    # Initialize the ImageClassifier.
    clf = ak.ImageClassifier(max_trials=3)
    # Search for the best model.
    clf.fit(x_train, y_train, epochs=10)
    # Evaluate on the testing data.
    print('Accuracy: {accuracy}'.format(
        accuracy=clf.evaluate(x_test, y_test)))
    

    Output:

    2020-06-27 10:58:33.462553: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
    (60000, 28, 28)
    (60000,)
    [5 0 4]
    2020-06-27 10:58:36.383872: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library nvcuda.dll
    2020-06-27 10:58:36.408106: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: 
    pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
    coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 11.00GiB deviceMemoryBandwidth: 451.17GiB/s
    2020-06-27 10:58:36.408509: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
    2020-06-27 10:58:36.413078: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
    2020-06-27 10:58:36.417132: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
    2020-06-27 10:58:36.418835: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
    2020-06-27 10:58:36.423328: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
    2020-06-27 10:58:36.426002: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
    2020-06-27 10:58:36.434870: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
    2020-06-27 10:58:36.435602: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
    2020-06-27 10:58:36.435996: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
    2020-06-27 10:58:36.443185: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2050d20ef80 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
    2020-06-27 10:58:36.443489: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): Host, Default Version
    2020-06-27 10:58:36.444072: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1561] Found device 0 with properties: 
    pciBusID: 0000:01:00.0 name: GeForce GTX 1080 Ti computeCapability: 6.1
    coreClock: 1.582GHz coreCount: 28 deviceMemorySize: 11.00GiB deviceMemoryBandwidth: 451.17GiB/s
    2020-06-27 10:58:36.444473: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_101.dll
    2020-06-27 10:58:36.444679: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
    2020-06-27 10:58:36.444878: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cufft64_10.dll
    2020-06-27 10:58:36.445075: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library curand64_10.dll
    2020-06-27 10:58:36.445271: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusolver64_10.dll
    2020-06-27 10:58:36.445472: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cusparse64_10.dll
    2020-06-27 10:58:36.445995: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
    2020-06-27 10:58:36.446630: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1703] Adding visible gpu devices: 0
    2020-06-27 10:58:37.011125: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1102] Device interconnect StreamExecutor with strength 1 edge matrix:
    2020-06-27 10:58:37.011341: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1108]      0 
    2020-06-27 10:58:37.011467: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1121] 0:   N 
    2020-06-27 10:58:37.012205: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1247] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 8685 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0, compute capability: 6.1)
    2020-06-27 10:58:37.014994: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x2054cf68070 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
    2020-06-27 10:58:37.015271: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 1080 Ti, Compute Capability 6.1
    [Starting new trial]
    Epoch 1/10
    2020-06-27 10:58:39.905392: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
    2020-06-27 10:58:40.120250: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudnn64_7.dll
    2020-06-27 10:58:49.373670: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] Internal: Invoking GPU asm compilation is supported on Cuda non-Windows platforms only
    Relying on driver to perform ptx compilation. 
    Modify $PATH to customize ptxas location.
    This message will be only logged once.
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.1756 - accuracy: 0.9475 - val_loss: 0.0648 - val_accuracy: 0.9820
    Epoch 2/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.0769 - accuracy: 0.9757 - val_loss: 0.0531 - val_accuracy: 0.9843
    Epoch 3/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.0619 - accuracy: 0.9806 - val_loss: 0.0535 - val_accuracy: 0.9850
    Epoch 4/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.0549 - accuracy: 0.9826 - val_loss: 0.0445 - val_accuracy: 0.9875
    Epoch 5/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.0464 - accuracy: 0.9852 - val_loss: 0.0485 - val_accuracy: 0.9876
    Epoch 6/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.0443 - accuracy: 0.9860 - val_loss: 0.0456 - val_accuracy: 0.9872
    Epoch 7/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.0373 - accuracy: 0.9875 - val_loss: 0.0430 - val_accuracy: 0.9882
    Epoch 8/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.0367 - accuracy: 0.9881 - val_loss: 0.0415 - val_accuracy: 0.9877
    Epoch 9/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.0355 - accuracy: 0.9882 - val_loss: 0.0392 - val_accuracy: 0.9896
    Epoch 10/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.0303 - accuracy: 0.9902 - val_loss: 0.0410 - val_accuracy: 0.9893
    [Trial complete]
    [Trial summary]
     |-Trial ID: 833421c38ba01d6bf17b9d86317d04e7
     |-Score: 0.03915029019117355
     |-Best step: 8
     > Hyperparameters:
     |-classification_head_1/dropout_rate: 0.5
     |-classification_head_1/spatial_reduction_1/reduction_type: flatten
     |-image_block_1/augment: False
     |-image_block_1/block_type: vanilla
     |-image_block_1/conv_block_1/dropout_rate: 0.25
     |-image_block_1/conv_block_1/filters_0_0: 32
     |-image_block_1/conv_block_1/filters_0_1: 64
     |-image_block_1/conv_block_1/kernel_size: 3
     |-image_block_1/conv_block_1/max_pooling: True
     |-image_block_1/conv_block_1/num_blocks: 1
     |-image_block_1/conv_block_1/num_layers: 2
     |-image_block_1/conv_block_1/separable: False
     |-image_block_1/normalize: True
     |-optimizer: adam
    [Starting new trial]
    Epoch 1/10
    1500/1500 [==============================] - 56s 37ms/step - loss: 2.1607 - accuracy: 0.2592 - val_loss: 2.0754 - val_accuracy: 0.3126
    Epoch 2/10
    1500/1500 [==============================] - 56s 38ms/step - loss: 1.9579 - accuracy: 0.3263 - val_loss: 3.0966 - val_accuracy: 0.2032
    Epoch 3/10
    1500/1500 [==============================] - 57s 38ms/step - loss: 1.8847 - accuracy: 0.3406 - val_loss: 7.0512 - val_accuracy: 0.2048
    Epoch 4/10
    1500/1500 [==============================] - 57s 38ms/step - loss: 1.7557 - accuracy: 0.3854 - val_loss: 1.3057 - val_accuracy: 0.5654
    Epoch 5/10
    1500/1500 [==============================] - 57s 38ms/step - loss: 1.5620 - accuracy: 0.4529 - val_loss: 1.3604 - val_accuracy: 0.5273
    Epoch 6/10
    1500/1500 [==============================] - 56s 37ms/step - loss: 1.8959 - accuracy: 0.3353 - val_loss: 1.5162 - val_accuracy: 0.4468
    Epoch 7/10
    1500/1500 [==============================] - 56s 38ms/step - loss: 1.4665 - accuracy: 0.4804 - val_loss: 1.4122 - val_accuracy: 0.5583
    Epoch 8/10
    1500/1500 [==============================] - 56s 37ms/step - loss: 1.3273 - accuracy: 0.5353 - val_loss: 1.0694 - val_accuracy: 0.6227
    Epoch 9/10
    1500/1500 [==============================] - 56s 37ms/step - loss: 1.2512 - accuracy: 0.5603 - val_loss: 7.5444 - val_accuracy: 0.6159
    Epoch 10/10
    1500/1500 [==============================] - 57s 38ms/step - loss: 1.3119 - accuracy: 0.5441 - val_loss: 1.1232 - val_accuracy: 0.6106
    [Trial complete]
    [Trial summary]
     |-Trial ID: 6f2e6be1e88f0a039f931bf22a28ce7d
     |-Score: 1.0693740844726562
     |-Best step: 7
     > Hyperparameters:
     |-classification_head_1/dropout_rate: 0
     |-image_block_1/augment: True
     |-image_block_1/block_type: resnet
     |-image_block_1/image_augmentation_1/horizontal_flip: True
     |-image_block_1/image_augmentation_1/vertical_flip: True
     |-image_block_1/normalize: True
     |-image_block_1/res_net_block_1/conv3_depth: 4
     |-image_block_1/res_net_block_1/conv4_depth: 6
     |-image_block_1/res_net_block_1/pooling: avg
     |-image_block_1/res_net_block_1/version: v2
     |-optimizer: adam
    [Starting new trial]
    Epoch 1/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.4535 - accuracy: 0.8895 - val_loss: 0.1098 - val_accuracy: 0.9680
    Epoch 2/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.1711 - accuracy: 0.9501 - val_loss: 0.0778 - val_accuracy: 0.9762
    Epoch 3/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.1494 - accuracy: 0.9560 - val_loss: 0.0721 - val_accuracy: 0.9796
    Epoch 4/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.1334 - accuracy: 0.9606 - val_loss: 0.0944 - val_accuracy: 0.9730
    Epoch 5/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.1341 - accuracy: 0.9616 - val_loss: 0.0667 - val_accuracy: 0.9808
    Epoch 6/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.1224 - accuracy: 0.9647 - val_loss: 0.0612 - val_accuracy: 0.9812
    Epoch 7/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.1138 - accuracy: 0.9667 - val_loss: 0.0740 - val_accuracy: 0.9799
    Epoch 8/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.1087 - accuracy: 0.9682 - val_loss: 0.0633 - val_accuracy: 0.9818
    Epoch 9/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.1101 - accuracy: 0.9675 - val_loss: 0.0536 - val_accuracy: 0.9838
    Epoch 10/10
    1500/1500 [==============================] - 6s 4ms/step - loss: 0.1075 - accuracy: 0.9691 - val_loss: 0.0574 - val_accuracy: 0.9841
    [Trial complete]
    [Trial summary]
     |-Trial ID: 15ba5def5f292232619aac183fac037e
     |-Score: 0.053594209253787994
     |-Best step: 8
     > Hyperparameters:
     |-classification_head_1/dropout_rate: 0.5
     |-classification_head_1/spatial_reduction_1/reduction_type: flatten
     |-image_block_1/augment: False
     |-image_block_1/block_type: vanilla
     |-image_block_1/conv_block_1/dropout_rate: 0.5
     |-image_block_1/conv_block_1/filters_0_0: 64
     |-image_block_1/conv_block_1/filters_0_1: 32
     |-image_block_1/conv_block_1/filters_1_0: 64
     |-image_block_1/conv_block_1/filters_1_1: 256
     |-image_block_1/conv_block_1/kernel_size: 5
     |-image_block_1/conv_block_1/max_pooling: False
     |-image_block_1/conv_block_1/num_blocks: 1
     |-image_block_1/conv_block_1/num_layers: 2
     |-image_block_1/conv_block_1/separable: False
     |-image_block_1/image_augmentation_1/horizontal_flip: True
     |-image_block_1/image_augmentation_1/vertical_flip: True
     |-image_block_1/normalize: False
     |-image_block_1/res_net_block_1/conv3_depth: 8
     |-image_block_1/res_net_block_1/conv4_depth: 23
     |-image_block_1/res_net_block_1/pooling: avg
     |-image_block_1/res_net_block_1/version: next
     |-optimizer: adam
    1875/1875 [==============================] - 5s 3ms/step - loss: 0.1586 - accuracy: 0.9519
    313/313 [==============================] - 1s 2ms/step - loss: 0.0531 - accuracy: 0.9810
    Accuracy: [0.05307671055197716, 0.9810000061988831]
    WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
    WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
    WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
    WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
    WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
    WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
    WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.iter
    WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_1
    WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.beta_2
    WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.decay
    WARNING:tensorflow:Unresolved object in checkpoint: (root).optimizer.learning_rate
    WARNING:tensorflow:A checkpoint was restored (e.g. tf.train.Checkpoint.restore or tf.keras.Model.load_weights) but not all checkpointed values were used. See above for specific issues. Use expect_partial() on the load status object, e.g. tf.train.Checkpoint.restore(...).expect_partial(), to silence these warnings, or use assert_consumed() to make the check explicit. See https://www.tensorflow.org/guide/checkpoint#loading_mechanics for details.
    
    Process finished with exit code 0
    
    

    Done!
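The three `[Trial summary]` blocks in the log above can also be compared programmatically. Below is a minimal sketch in plain Python (no AutoKeras dependency; the `best_trial` helper and the inlined `log` snippet are my own illustration) that pairs each `|-Trial ID` with its `|-Score` and returns the trial with the lowest score, since AutoKeras minimizes the tuning objective (here essentially the validation loss):

```python
import re

def best_trial(log_text: str):
    """Pair each Trial ID with its Score and return the (id, score) with the lowest score."""
    ids = re.findall(r"\|-Trial ID: (\w+)", log_text)
    scores = [float(s) for s in re.findall(r"\|-Score: ([\d.]+)", log_text)]
    return min(zip(ids, scores), key=lambda pair: pair[1])

# The three trials from the log output above:
log = """
 |-Trial ID: 833421c38ba01d6bf17b9d86317d04e7
 |-Score: 0.03915029019117355
 |-Trial ID: 6f2e6be1e88f0a039f931bf22a28ce7d
 |-Score: 1.0693740844726562
 |-Trial ID: 15ba5def5f292232619aac183fac037e
 |-Score: 0.053594209253787994
"""
print(best_trial(log))  # ('833421c38ba01d6bf17b9d86317d04e7', 0.03915029019117355)
```

This confirms what the log shows: the first trial (the small "vanilla" conv block) wins with a score of about 0.039, consistent with its final val_loss of 0.0392, while the resnet trial never catches up within 10 epochs.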

    All articles on this site are original; reposting is welcome, but please credit the source: https://blog.csdn.net/weixin_45092662. Baidu results and scraper sites are unreliable, so verify search results carefully. Technical articles tend to go stale, and I periodically revise and update my posts, so please visit the source for the latest version of this article.
