  • Installing and Using MySQL 5.7 on Mac

    2017-08-10 11:18:50

    1 Download MySQL

    http://www.mysql.com/downloads/mysql/

    2 Start MySQL

    After installation, open System Preferences → MySQL and click Start MySQL Server.

    Then run the following commands so the shell can find the MySQL binaries:

    alias mysql=/usr/local/mysql/bin/mysql
    alias mysqladmin=/usr/local/mysql/bin/mysqladmin

    3 Check the initial root password shown in the notification

    Command to reset the password:

    mysqladmin -u root -p password newpass

    4 Resetting the password on first login

    After installing MySQL and logging in, every command fails with this message:

    ERROR 1820 (HY000): You must reset your password using ALTER USER statement before executing this statement.

    step 1: SET PASSWORD = PASSWORD('your new password');

    step 2: ALTER USER 'root'@'localhost' PASSWORD EXPIRE NEVER;

    step 3: flush privileges;

    After completing these three steps, log out and log back in with the new password. Apart from replacing 'your new password' with your own password, type everything else exactly as shown.

  • Using MySQL 5.7 with Docker

    2020-11-23 12:01:40

    This article covers problems encountered when running MySQL in Docker and how to solve them.

    1. Install the MySQL 5.7 image

    First, pull the mysql 5.7 image:

    docker pull mysql:5.7
    

    After the image downloads, check it:

    docker images
    

    You should see mysql 5.7 in the list.

    Run the container

    Here we run it with -v, i.e. with volume mounts:

    docker run -d -p 3310:3306 -v /home/mysql/conf:/etc/mysql/conf.d -v /home/mysql/data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 --name mysql01 mysql:5.7
    

    -d runs the container in the background (detached)
    -p maps ports (host port before the colon, MySQL's port inside the container after it; connecting to host port 3310 reaches port 3306 inside Docker)
    -v mounts volumes (host path before the colon, MySQL's path inside the container after it; local storage stays in sync with the container's data, which makes it easy to edit configuration and means the data survives if the container is deleted)
    -e sets environment variables; here it sets the MySQL root password
    --name gives the container a name

    Once it has started, run

    docker ps
    

    to check. If the container is listed, everything is fine, and you can connect to port 3310 on the host to use the MySQL running inside Docker.

    Connect with a client tool and create a database named test; a test directory appears in sync under /home/mysql/data on the host.

    Problems

    If the container did not start (docker ps does not show it, but docker ps -a does),

    inspect its logs with docker logs <container name>:

    docker logs mysql01
    
    2020-11-23 03:05:33+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.32-1debian10 started.
    2020-11-23 03:05:33+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
    2020-11-23 03:05:33+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 5.7.32-1debian10 started.
    2020-11-23 03:05:33+00:00 [Note] [Entrypoint]: Initializing database files
    2020-11-23T03:05:33.367092Z 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
    2020-11-23T03:05:33.368295Z 0 [ERROR] --initialize specified but the data directory has files in it. Aborting.
    2020-11-23T03:05:33.368318Z 0 [ERROR] Aborting
    

    If this appears, the mounted directory is not empty: clear it out, remove the container, and start again

    docker rm <container id>
    
    If the container started but the mounted data directory is empty,

    the wrong path inside the container was mapped. Check the command for typos and match the path to your MySQL layout; it is usually /var/lib/mysql, though sometimes /var/mysql/data. Enter the container to check:

    docker exec -it <container id> /bin/bash
    
  • Using MySQL 5.7 Generated Columns to Improve Query Performance

    2018-02-25 00:00:00

    Introduction

    Translation team: Xingyao team @ Zhishutang (知数堂)

    Team members: Xingyao-Fenda, Xingyao-Shunzi, Xingyao-M

    Original source: https://www.percona.com/blog/2018/01/29/using-generated-columns-in-mysql-5-7-to-increase-query-performance/

    Original author: Alexander Rubin


    In this blog post, we’ll look at ways you can use MySQL 5.7 generated columns (or virtual columns) to improve query performance.

    Overview


    About two years ago I published a blog post about Generated (Virtual) Columns in MySQL 5.7. Since then, it’s been one of my favorite features in the MySQL 5.7 release. The reason is simple: with the help of virtual columns, we can create fine-grained indexes that can significantly increase query performance. I’m going to show you some tricks that can potentially fix slow reporting queries with GROUP BY and ORDER BY.

    The Problem

    Recently I was working with a customer who was struggling with this query:

    SELECT 
    CONCAT(verb, ' - ', replace(url,'.xml','')) AS 'API Call', 
    COUNT(*) as 'No. of API Calls', 
    AVG(ExecutionTime) as 'Avg. Execution Time', 
    COUNT(distinct AccountId) as 'No. Of Accounts', 
    COUNT(distinct ParentAccountId) as 'No. Of Parents' 
    FROM ApiLog 
    WHERE ts between '2017-10-01 00:00:00' and '2017-12-31 23:59:59' 
    GROUP BY CONCAT(verb, ' - ', replace(url,'.xml','')) 
    HAVING COUNT(*) >= 1 ;


    The query was running for more than an hour and used all space in the tmp directory (with sort files).


    The table looked like this:

    CREATE TABLE `ApiLog` (
    `Id` int(11) NOT NULL AUTO_INCREMENT,
    `ts` timestamp DEFAULT CURRENT_TIMESTAMP,
    `ServerName` varchar(50)  NOT NULL default '',
    `ServerIP` varchar(50)  NOT NULL default '',
    `ClientIP` varchar(50)  NOT NULL default '',
    `ExecutionTime` int(11) NOT NULL default 0,
    `URL` varchar(3000) COLLATE utf8mb4_unicode_ci NOT NULL,
    `Verb` varchar(16)  NOT NULL,
    `AccountId` int(11) NOT NULL,
    `ParentAccountId` int(11) NOT NULL,
    `QueryString` varchar(3000) NOT NULL,
    `Request` text NOT NULL,
    `RequestHeaders` varchar(2000) NOT NULL,
    `Response` text NOT NULL,
    `ResponseHeaders` varchar(2000) NOT NULL,
    `ResponseCode` varchar(4000) NOT NULL,
    ... // other fields removed for simplicity
    PRIMARY KEY (`Id`),
    KEY `index_timestamp` (`ts`),
    ... // other indexes removed for simplicity
    ) ENGINE=InnoDB;


    We found out the query was not using an index on the timestamp field (“ts”):

    mysql> explain SELECT CONCAT(verb, ' - ', replace(url,'.xml','')) AS 'API Call', COUNT(*)  as 'No. of API Calls',  avg(ExecutionTime) as 'Avg. Execution Time', count(distinct AccountId) as 'No. Of Accounts',  count(distinct ParentAccountId) as 'No. Of Parents'  FROM ApiLog  WHERE ts between '2017-10-01 00:00:00' and '2017-12-31 23:59:59'  GROUP BY CONCAT(verb, ' - ', replace(url,'.xml',''))  HAVING COUNT(*)  >= 1\G
    *************************** 1. row ***************************
               id: 1
      select_type: SIMPLE
            table: ApiLog
       partitions: NULL
             type: ALL
    possible_keys: ts
              key: NULL
          key_len: NULL
              ref: NULL
             rows: 22255292
         filtered: 50.00
            Extra: Using where; Using filesort
    1 row in set, 1 warning (0.00 sec)


    The reason for that is simple: the number of rows matching the filter condition was too large for an index scan to be efficient (or at least the optimizer thinks that):

    mysql> select count(*) from ApiLog WHERE ts between '2017-10-01 00:00:00' and '2017-12-31 23:59:59' ;
    +----------+
    | count(*) |
    +----------+
    |  7948800 |
    +----------+
    1 row in set (2.68 sec)

    Total number of rows: 21998514. The query needs to scan 36% of the total rows (7948800 / 21998514). (Translator's note: once the estimated scan exceeds roughly 20% to 30% of the table, the optimizer usually switches to a full table scan even when an index exists.)
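
    The 36% figure is simple arithmetic on the two counts quoted above; a quick sketch:

    ```python
    # Rows matched by the WHERE filter vs. total rows in ApiLog,
    # both taken from the COUNT(*) outputs quoted above.
    matched_rows = 7_948_800
    total_rows = 21_998_514

    scan_ratio = matched_rows / total_rows
    print(f"{scan_ratio:.0%}")  # 36% of the table
    ```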


    In this case, we have a number of approaches:

    1. Create a combined index on timestamp column + group by fields

    2. Create a covered index (including fields that are selected)

    3. Create an index on just GROUP BY fields

    4. Create an index for loose index scan


    However, if we look closer at the “GROUP BY” part of the query, we quickly realize that none of those solutions will work. Here is our GROUP BY part:

    GROUP BY CONCAT(verb, ' - ', replace(url,'.xml',''))


    There are two problems here:

    1. It is using a calculating field, so MySQL can’t just scan the index on verb + url. It needs to first concat two fields, and then group on the concatenated string. That means that the index won’t be used.

    2. The URL is declared as “varchar(3000) COLLATE utf8mb4_unicode_ci NOT NULL” and can’t be indexed in full (even with innodb_large_prefix=1 option, which is the default as we have utf8 enabled). We can only do a partial index, which won’t be helpful for GROUP BY optimization.
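
    The index-length limit is easy to see numerically. With utf8mb4, InnoDB reserves up to 4 bytes per character when sizing an index key, so a sketch of the arithmetic behind the ERROR 1071 shown below:

    ```python
    # Why a full index on url fails: InnoDB's index key limit is 3072 bytes
    # (with innodb_large_prefix enabled), and utf8mb4 reserves 4 bytes per character.
    MAX_KEY_BYTES = 3072      # limit reported by ERROR 1071
    BYTES_PER_CHAR = 4        # worst case for utf8mb4

    url_chars = 3000          # varchar(3000) from the table definition
    url_key_bytes = url_chars * BYTES_PER_CHAR

    print(url_key_bytes)                      # 12000 bytes, far over the limit
    print(MAX_KEY_BYTES // BYTES_PER_CHAR)    # at most 768 characters are indexable
    ```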


    Here, I’m trying to add a full index on the URL with innodb_large_prefix=1:

    mysql> alter table ApiLog add key verb_url(verb, url);
    ERROR 1071 (42000): Specified key was too long; max key length is 3072 bytes

    Well, changing the "GROUP BY CONCAT(verb, ' - ', replace(url,'.xml',''))" to "GROUP BY verb, url" could help (assuming that we somehow trim the field definition from varchar(3000) to something smaller, which may or may not be possible). However, it will change the results as it will not remove the .xml extension from the URL field.

    The Solution

    The good news is that in MySQL 5.7 we have virtual columns. So we can create a virtual column on top of "CONCAT(verb, ' - ', replace(url,'.xml',''))". The best part: we do not have to perform a GROUP BY with the full string (potentially > 3000 bytes). We can use an MD5 hash (or a longer hash, e.g. sha1/sha2) for the purposes of the GROUP BY.


    Here is the solution:

    alter table ApiLog add verb_url_hash varbinary(16) GENERATED ALWAYS AS (unhex(md5(CONCAT(verb, ' - ', replace(url,'.xml',''))))) VIRTUAL;
    alter table ApiLog add key (verb_url_hash);
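
    To see what the generated column actually stores, the expression unhex(md5(concat(...))) can be mirrored in Python. A sketch; the 'GET' / '/v1/users.xml' pair is an illustrative value, not data from the article:

    ```python
    import hashlib

    def verb_url_hash(verb: str, url: str) -> bytes:
        """Mirror of the MySQL expression
        unhex(md5(concat(verb, ' - ', replace(url, '.xml', ''))))."""
        api_call = f"{verb} - {url.replace('.xml', '')}"
        # md5().digest() yields the same 16 raw bytes that MySQL's
        # unhex(md5(...)) produces from the 32-character hex string.
        return hashlib.md5(api_call.encode("utf8")).digest()

    h = verb_url_hash("GET", "/v1/users.xml")
    print(len(h))   # 16 -- matches the varbinary(16) column type
    ```

    Grouping on these 16-byte values gives the same groups as grouping on the full concatenated string, barring MD5 collisions, which is why the index on the virtual column can stand in for the unindexable expression.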


    So what we did here is:

    1. Declared the virtual column with type varbinary(16)

    2. Created a virtual column on CONCAT(verb, ' - ', replace(url,'.xml','')), and used an MD5 hash on top plus an unhex to convert 32 hex bytes to 16 binary bytes

    3. Created an index on top of the virtual column


    Now we can change the query and GROUP BY verb_url_hash column:

    mysql> explain SELECT CONCAT(verb, ' - ', replace(url,'.xml',''))
    AS 'API Call', COUNT(*)  as 'No. of API Calls',
    avg(ExecutionTime) as 'Avg. Execution Time',
    count(distinct AccountId) as 'No. Of Accounts',
    count(distinct ParentAccountId) as 'No. Of Parents'
    FROM ApiLog
    WHERE ts between '2017-10-01 00:00:00' and '2017-12-31 23:59:59'
    GROUP BY verb_url_hash
    HAVING COUNT(*)  >= 1;
    ERROR 1055 (42000): Expression #1 of SELECT list is not in
    GROUP BY clause and contains nonaggregated column 'ApiLog.ApiLog.Verb'
    which is not functionally dependent on columns in GROUP BY clause;
    this is incompatible with sql_mode=only_full_group_by


    MySQL 5.7 has a strict mode enabled by default, which we can change for that query only.

    Now the explain plan looks much better:

    mysql> select @@sql_mode;
    +-------------------------------------------------------------------------------------------------------------------------------------------+
    | @@sql_mode                                                                                                                                |
    +-------------------------------------------------------------------------------------------------------------------------------------------+
    | ONLY_FULL_GROUP_BY,STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION |
    +-------------------------------------------------------------------------------------------------------------------------------------------+
    1 row in set (0.00 sec)
    mysql> set sql_mode='STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION';
    Query OK, 0 rows affected (0.00 sec)
    mysql> explain SELECT CONCAT(verb, ' - ', replace(url,'.xml','')) AS 'API Call', COUNT(*)  as 'No. of API Calls',  avg(ExecutionTime) as 'Avg. Execution Time', count(distinct AccountId) as 'No. Of Accounts',  count(distinct ParentAccountId) as 'No. Of Parents'  FROM ApiLog  WHERE ts between '2017-10-01 00:00:00' and '2017-12-31 23:59:59'  GROUP BY verb_url_hash HAVING COUNT(*)  >= 1\G
    *************************** 1. row ***************************
               id: 1
      select_type: SIMPLE
            table: ApiLog
       partitions: NULL
             type: index
    possible_keys: ts,verb_url_hash
              key: verb_url_hash
          key_len: 19
              ref: NULL
             rows: 22008891
         filtered: 50.00
            Extra: Using where
    1 row in set, 1 warning (0.00 sec)

    MySQL will avoid any sorting, which is much faster. It will still have to eventually scan the whole table in the order of the index. The response time is significantly better: ~38 seconds as opposed to more than an hour.

    Covered Index

    Now we can attempt to do a covered index, which will be quite large:

    mysql> alter table ApiLog add key covered_index (`verb_url_hash`,`ts`,`ExecutionTime`,`AccountId`,`ParentAccountId`, verb, url);
    Query OK, 0 rows affected (1 min 29.71 sec)
    Records: 0  Duplicates: 0  Warnings: 0

    We had to add "verb" and "url", so beforehand I had to remove the COLLATE utf8mb4_unicode_ci from the table definition. Now explain shows that we're using the index:

    mysql> explain SELECT  CONCAT(verb, ' - ', replace(url,'.xml','')) AS 'API Call',  COUNT(*) as 'No. of API Calls',  AVG(ExecutionTime) as 'Avg. Execution Time',  COUNT(distinct AccountId) as 'No. Of Accounts',  COUNT(distinct ParentAccountId) as 'No. Of Parents'  FROM ApiLog  WHERE ts between '2017-10-01 00:00:00' and '2017-12-31 23:59:59'  GROUP BY verb_url_hash  HAVING COUNT(*) >= 1\G
    *************************** 1. row ***************************
               id: 1
      select_type: SIMPLE
            table: ApiLog
       partitions: NULL
             type: index
    possible_keys: ts,verb_url_hash,covered_index
              key: covered_index
          key_len: 3057
              ref: NULL
             rows: 22382136
         filtered: 50.00
            Extra: Using where; Using index
    1 row in set, 1 warning (0.00 sec)


    The response time dropped to ~12 seconds! However, the index size is significantly larger compared to just verb_url_hash (16 bytes per record).
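
    A rough back-of-the-envelope comparison of the two key payloads, using the row estimate from the explain output. This ignores InnoDB page overhead and the implicit primary-key pointer, and key_len is a maximum (varchars store their actual length), so treat it as an upper-bound sketch only:

    ```python
    rows = 22_382_136             # row estimate from the explain output

    hash_key_bytes = 16           # verb_url_hash is varbinary(16)
    covered_key_bytes = 3057      # key_len reported for covered_index

    hash_index_mb = rows * hash_key_bytes / 1024**2
    covered_index_mb = rows * covered_key_bytes / 1024**2   # worst-case upper bound

    print(f"{hash_index_mb:.0f} MB vs up to {covered_index_mb:.0f} MB")
    ```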

    Conclusion

    MySQL 5.7 generated columns provide a valuable way to improve query performance. If you have an interesting case, please share in the comments.


  • This article describes installing MySQL 5.7 and MySQL Workbench through the Kylin Software Store on Kylin (银河麒麟) Desktop OS V10, and their basic use. 1. Install MySQL 5.7: click the UK icon, then All Programs, and find the Kylin Software Store; type mysql; click the Install button under MySQL Server...

    Preface

    This article describes how to install MySQL 5.7 and MySQL Workbench through the Kylin Software Store on Kylin Desktop OS V10, and how to use them.


     

    1. Install MySQL 5.7

    Click the UK icon, then All Programs, and find the Kylin Software Store

    Type mysql

    Click the Install button under MySQL Server

    Enter your user password and click Authorize

    Wait for the installation to finish

     

    2. Install MySQL Workbench

    Following the same procedure, install Workbench by clicking the Install button under MySQL Workbench

    Enter your password and authorize

    Wait for the installation to finish

    3. Manage the MySQL instance with MySQL Workbench

    Start the MySQL service

    Run the following to start the mysqld process:

    sudo systemctl start mysql

    Check the service status:

    sudo systemctl status mysql

    Check which address port 3306 is listening on:

    sudo netstat -lnetp | grep 3306

    Change the listening address

    Note that mysqld listens on 127.0.0.1; change it to 0.0.0.0 as follows:

    Open the configuration file with vim:

    sudo vim /etc/mysql/mysql.conf.d/mysql.conf

    Use the arrow keys to move the cursor to the bind-address line, press i to enter insert mode, and change 127.0.0.1 to 0.0.0.0. Then press Esc to return to command mode and type :wq to save and exit.

    Check the listening state again

    Set the root password

    After a default installation, root logs in via a UNIX socket; use the mysql command-line client to grant root remote login.

    Note on the auth_socket plugin: it neither checks nor requires a password. It only checks that the user is connecting through a UNIX socket and then compares the username. Remember to use sudo to switch to root privileges.

    For simplicity, delete the existing root account entries outright and create a new one:

    # sudo mysql
    [sudo] yeqiang 的密码:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 15
    Server version: 5.7.27-0kord0.16.04.1k2 (Kylin)
    
    Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
    
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    mysql> delete from mysql.user where user='root';
    Query OK, 2 rows affected (0.00 sec)
    
    mysql> grant all privileges on *.* to 'root'@'%' identified by 'rootpwd';
    Query OK, 0 rows affected, 1 warning (0.00 sec)
    
    mysql> flush privileges;
    Query OK, 0 rows affected (0.00 sec)
    
    mysql> select * from mysql.user where user='root';
    +------+------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+-----------------------+-------------------------------------------+------------------+-----------------------+-------------------+----------------+
    | Host | User | Select_priv | Insert_priv | Update_priv | Delete_priv | Create_priv | Drop_priv | Reload_priv | Shutdown_priv | Process_priv | File_priv | Grant_priv | References_priv | Index_priv | Alter_priv | Show_db_priv | Super_priv | Create_tmp_table_priv | Lock_tables_priv | Execute_priv | Repl_slave_priv | Repl_client_priv | Create_view_priv | Show_view_priv | Create_routine_priv | Alter_routine_priv | Create_user_priv | Event_priv | Trigger_priv | Create_tablespace_priv | ssl_type | ssl_cipher | x509_issuer | x509_subject | max_questions | max_updates | max_connections | max_user_connections | plugin                | authentication_string                     | password_expired | password_last_changed | password_lifetime | account_locked |
    +------+------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+-----------------------+-------------------------------------------+------------------+-----------------------+-------------------+----------------+
    | %    | root | Y           | Y           | Y           | Y           | Y           | Y         | Y           | Y             | Y            | Y         | N          | Y               | Y          | Y          | Y            | Y          | Y                     | Y                | Y            | Y               | Y                | Y                | Y              | Y                   | Y                  | Y                | Y          | Y            | Y                      |          |            |             |              |             0 |           0 |               0 |                    0 | mysql_native_password | *A7663C386E0231DEB41859368A584CDF1D355C29 | N                | 2020-12-08 16:39:19   |              NULL | N              |
    +------+------+-------------+-------------+-------------+-------------+-------------+-----------+-------------+---------------+--------------+-----------+------------+-----------------+------------+------------+--------------+------------+-----------------------+------------------+--------------+-----------------+------------------+------------------+----------------+---------------------+--------------------+------------------+------------+--------------+------------------------+----------+------------+-------------+--------------+---------------+-------------+-----------------+----------------------+-----------------------+-------------------------------------------+------------------+-----------------------+-------------------+----------------+
    1 row in set (0.00 sec)
    
    mysql>
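
    The authentication_string shown above is a mysql_native_password hash, which is a double SHA-1 of the password. A sketch of that hashing scheme; if the password really was 'rootpwd', the printed value should reproduce the column above:

    ```python
    import hashlib

    def mysql_native_password(password: str) -> str:
        """mysql_native_password hashing: '*' + uppercase hex of SHA1(SHA1(pwd))."""
        inner = hashlib.sha1(password.encode("utf8")).digest()
        return "*" + hashlib.sha1(inner).hexdigest().upper()

    h = mysql_native_password("rootpwd")
    print(h)
    ```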

     

    Start Workbench

    Click the UK icon, then All Programs, then MySQL Workbench

    The local connection appears automatically

    Right-click it and choose Edit Connection

    Click Store in Keychain ...

    Enter the password and click OK

    Test the connection

    View the server status

     


    Summary

    The Kylin software store makes it easy to install MySQL and Workbench.

    Note: the root account created by default carries certain restrictions. For simplicity, follow this article: delete the root account entries first, then recreate the account with GRANT. Otherwise you can easily run into access denied errors.

  • Pomelo MySQL for Entity Framework Core is the work of MVP 郑逸笙; its feature support moves fast. This article covers using MySQL 5.7 JSON columns in .NET Core.
  • This article mainly covers installing MySQL Community Server 5.7.17 on Linux (Ubuntu 16.04). To guarantee the exact MySQL version, the MySQL 5.7 deb package is used for installation. It also covers configuring Chinese mirror sources for Ubuntu 16.04.
  • Using MySQL 5.7 on Windows 10

    2018-03-14 16:49:13
    You can download it without registering an account; click the red area in the figure below. The downloaded file is mysql-installer-community-5.7.21.0.msi. 2. Steps to change during installation (defaults elsewhere): choose a custom install, click the green arrow in the figure below to move the current Product to the list to be installed...
  • With MySQL 5.7 and utf8mb4 / utf8mb4_general_ci, the crawler can now store emoji. MySQL 8.0 differs quite a bit. Copy python_spider in full to python_spider_copy: 1. right-click python_spider, choose Data Transfer; all tables are transferred by default, and you can...
  • Learning MySQL 5.7's sys schema

    2017-06-18 22:00:35
    Where the sys schema's data comes from: all of it is sourced from performance_schema. The goal is to reduce the complexity of performance_schema so that DBAs can read its contents more easily and understand how the database is running more quickly. The sys schema contains two kinds of tables ...
  • [Digest] This article provides the details and benchmark results behind "MySQL 5.7: 500K queries per second", explaining my earlier talk at MySQL Connect.
  • Download the no-install (zip) edition from the official site: https://dev.mysql.com/downloads/mysql/...# set the mysql client default character set default-character-set=utf8 [mysqld] # set port 3306 port = 3306 # set the mysql install directory basedir=D:\workspan\mysql...
  • Importing data fails with Invalid default value for 'CREATE_TIME'. The cause is a column defined as create_date datetime DEFAULT CURRENT_TIMESTAMP COMMENT '创建时间', i.e. the CURRENT_TIMESTAMP default ...
  • Complete steps for connecting to a MySQL 5.7 database from Golang

    2018-09-11 11:50:52
    service mysql start. The following operations are run inside MySQL. To avoid modifying system databases, create a new one: CREATE DATABASE test_db; switch to it: use test_db; create a table in this database, where...
  • This tutorial explains how to install MySQL 5.7 on a CentOS 7 server. CentOS 7 ships MariaDB in place of MySQL; running "yum install mysql-server" installs MariaDB. Before you begin: 1. Update the system (a reboot is recommended afterwards): ...
