Xing Dong's Blog
Xing Dong's Blog (Action's blog) — notes on work and everyday life
How to check the disk space used by a MySQL database
Tags:
mysql
==== Open a terminal =====
mysql -uroot -p
show databases;
use information_schema;
==== 1. Total size of all databases ====
select concat(round(sum(DATA_LENGTH/1024/1024),2),'MB') as data from tables;
==== 2. Size of one database ====
select concat(round(sum(DATA_LENGTH/1024/1024),2),'MB') as data from tables where TABLE_SCHEMA='wordpress';
==== 3. Size of one table in a database ====
select concat(round(DATA_LENGTH/1024/1024,2),'MB') as data from tables where TABLE_SCHEMA='wordpress' and TABLE_NAME='wp_options';
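Note that DATA_LENGTH in information_schema.tables counts table data only; add INDEX_LENGTH if you also want index size. The concat(round(...),'MB') wrapper is just a bytes-to-MB conversion; a minimal Python sketch of the same arithmetic (the helper name bytes_to_mb is mine, not MySQL's):

```python
def bytes_to_mb(num_bytes):
    """Mirror MySQL's concat(round(x/1024/1024, 2), 'MB')."""
    return f"{round(num_bytes / 1024 / 1024, 2)}MB"

# e.g. a table whose DATA_LENGTH is 5 MiB
print(bytes_to_mb(5 * 1024 * 1024))  # → 5.0MB
```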
SQL date truncation and running totals
Create the table and insert test data
CREATE TABLE `post` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `title` varchar(30) NOT NULL,
  `status` tinyint(1) NOT NULL,
  `cdate` datetime NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=5 DEFAULT CHARSET=utf8;
insert into post (title,status,cdate) values ('测试1',1,'2019-08-01 10:10:10');
insert into post (title,status,cdate) values ('测试1',1,'2019-08-02 11:10:20');
insert into post (title,status,cdate) values ('测试1',1,'2019-08-01 12:10:20');
insert into post (title,status,cdate) values ('测试1',1,'2019-08-03 13:10:20');
insert into post (title,status,cdate) values ('测试1',1,'2019-08-02 14:10:20');
insert into post (title,status,cdate) values ('测试1',1,'2019-08-03 15:10:20');
insert into post (title,status,cdate) values ('测试1',1,'2019-08-03 16:10:20');
SQL practice
-- fetch the rows for 2019-08-03
select * from post where left(cdate,10) = '2019-08-03';                  -- truncate with left()
select * from post where substring(cdate,1,10) = '2019-08-03';           -- truncate with substring()
select * from post where DATE_FORMAT(cdate, '%Y-%m-%d') = '2019-08-03';  -- format the datetime
-- group by day
select count(*), substring(cdate,1,10) from post group by substring(cdate,1,10) order by count(*) desc;
-- group by day, with aliases
select count(*) as count, substring(cdate,1,10) as cdate from post group by substring(cdate,1,10) order by count desc;
-- running total across days
set @tmptotal := 0;
select cdate, (@tmptotal := @tmptotal + count) as total
from (select count(*) as count, substring(cdate,1,10) as cdate from post
      group by substring(cdate,1,10) order by cdate) as tmpdatas
group by cdate;
-- running total across days, keeping only days where the total exceeds 10
set @tmptotal := 0;
select cdate, (@tmptotal := @tmptotal + count) as total
from (select count(*) as count, substring(cdate,1,10) as cdate from post
      group by substring(cdate,1,10) order by cdate) as tmpdatas
group by cdate having total > 10;
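The session-variable trick above computes a per-day running total. For intuition, the same group-then-accumulate logic in Python, using the day values from the test rows above:

```python
from collections import Counter
from itertools import accumulate

# cdate values from the seven test rows, truncated to the day
dates = ['2019-08-01', '2019-08-02', '2019-08-01', '2019-08-03',
         '2019-08-02', '2019-08-03', '2019-08-03']

per_day = sorted(Counter(dates).items())                   # group by day, order by day
totals = list(accumulate(count for _, count in per_day))   # running total
for (day, _), total in zip(per_day, totals):
    print(day, total)
# 2019-08-01 2
# 2019-08-02 4
# 2019-08-03 7
```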
MySQL lock notes
Lock categories
1. Read locks (shared: other reads are unaffected, but inserts from other sessions block) and write locks (exclusive: until the write finishes, all other reads and writes are blocked)
2. Table locks and row-level locks
====== 1. Table locks, read-biased ========
Typical of the MyISAM storage engine: low overhead, fast to acquire, coarse granularity, the highest chance of lock conflicts, and the lowest concurrency
// lock manually
lock table table_name read|write, table_name2 read|write, ...;
// show which tables are locked
show open tables;
// release table locks
unlock tables;
// analyze table locking
show status like 'table%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Table_locks_immediate      | 112   | // times a table lock was granted immediately (+1 per immediate grant)
| Table_locks_waited         | 0     | // times a table lock could not be granted immediately and had to wait (+1 per wait); a high value indicates serious table-lock contention
| Table_open_cache_hits      | 0     |
| Table_open_cache_misses    | 6     | // times a table was opened without hitting the table cache
| Table_open_cache_overflows | 0     |
+----------------------------+-------+
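A quick way to read the two lock counters together is as a wait ratio: the share of table-lock requests that could not be granted immediately. A small sketch (the function name is mine; the sample values are the ones shown above):

```python
def table_lock_wait_ratio(immediate, waited):
    """Share of table-lock requests that had to wait (Table_locks_waited
    over all table-lock requests)."""
    total = immediate + waited
    return waited / total if total else 0.0

# sample values from the SHOW STATUS output above
print(table_lock_wait_ratio(112, 0))  # → 0.0  (no contention)
```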
====== 2. Row locks, write-biased ========
Row locks are characteristic of the InnoDB storage engine: higher overhead, slower to acquire, deadlocks are possible, the finest granularity, the lowest chance of lock conflicts, and the highest concurrency.
The two biggest differences between InnoDB and MyISAM: transaction support (TRANSACTION) and row-level locking.
Transaction properties: Atomicity, Consistency, Isolation, Durability
1. set autocommit=0; update ...; commit;
2. Gap locks
When data is retrieved with a range condition rather than an equality condition and a shared or exclusive lock is requested, InnoDB locks the index entries of the existing records that match; key values that fall inside the range but do not exist are called gaps (GAP), and InnoDB locks those gaps as well.
Harm: with a range query, InnoDB locks every index value in the whole range, even key values that do not exist.
Row lock analysis:
show status like 'innodb_row_lock%';
+-------------------------------+-------+
| Variable_name                 | Value |
+-------------------------------+-------+
| Innodb_row_lock_current_waits | 0     | // number of row-lock waits currently in progress
| Innodb_row_lock_time          | 0     | // total time spent waiting for row locks since startup ***
| Innodb_row_lock_time_avg      | 0     | // average time per wait ***
| Innodb_row_lock_time_max      | 0     | // longest single wait since startup
| Innodb_row_lock_waits         | 0     | // total number of waits since startup ***
+-------------------------------+-------+
Optimization tips
1. Retrieve data through indexes whenever possible, so unindexed access does not escalate row locks into table locks
2. Design indexes sensibly to keep the locked range as small as possible
3. Narrow search conditions to avoid gap locks
4. Keep transactions small to reduce both the resources locked and how long they are held
5. Use the lowest transaction isolation level you can get away with
====== 3. Page locks, write-biased ========
Overhead and lock-acquisition time sit between table and row locks; deadlocks can occur; granularity is between table and row locks; concurrency is moderate
==================================
Shared lock (S): SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE;
Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE;
A key property of InnoDB row locking: InnoDB uses row-level locks only when data is retrieved via an index; otherwise it locks all rows, which amounts to a table lock!
Spring Boot 2.x integration with Elasticsearch 5.5
========== pom.xml ==============
1. Update the properties:
<properties>
<java.version>1.8</java.version>
<elasticsearch.version>5.5.2</elasticsearch.version>
</properties>
2. Add the dependencies:
<!-- ES elasticsearch -->
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>transport</artifactId>
<version>${elasticsearch.version}</version>
</dependency>
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-core</artifactId>
<version>2.11.0</version>
</dependency>
======= Add MyEsConfig.java =============
package com.action.firstappdemo.config;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import java.net.InetAddress;
import java.net.UnknownHostException;

/**
 * @Author Action
 * @Date Create in 2019/5/5 15:21
 */
@Configuration
public class MyEsConfig {

    @Bean
    public TransportClient client() throws UnknownHostException {
        InetSocketTransportAddress node = new InetSocketTransportAddress(
                InetAddress.getByName("localhost"), 9300);
        InetSocketTransportAddress node1 = new InetSocketTransportAddress(
                InetAddress.getByName("localhost"), 9301);
        InetSocketTransportAddress node2 = new InetSocketTransportAddress(
                InetAddress.getByName("localhost"), 9302);

        Settings settings = Settings.builder()
                .put("cluster.name", "action")
                .build();

        TransportClient client = new PreBuiltTransportClient(settings);
        client.addTransportAddress(node);
        client.addTransportAddress(node1);
        client.addTransportAddress(node2);
        return client;
    }
}
====== EsController.java: create, read, update, delete ===============
package com.action.firstappdemo.controller;

import lombok.extern.slf4j.Slf4j;
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.search.SearchRequestBuilder;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.action.search.SearchType;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.action.update.UpdateResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.xcontent.XContentBuilder;
import org.elasticsearch.common.xcontent.XContentFactory;
import org.elasticsearch.index.query.BoolQueryBuilder;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.query.RangeQueryBuilder;
import org.elasticsearch.search.SearchHit;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.format.annotation.DateTimeFormat;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

import java.io.IOException;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;

/**
 * Run with: mvn spring-boot:run
 */
@Slf4j
@Controller
@RequestMapping("es")
public class EsController {

    @Autowired
    private TransportClient client;

    @RequestMapping("/get/people/man")
    @ResponseBody
    public ResponseEntity get(@RequestParam(name = "id", defaultValue = "") String id) {
        if (id.isEmpty()) {
            return new ResponseEntity(HttpStatus.NOT_FOUND);
        }
        // prepareGet builds the request; get() executes it
        GetResponse result = client.prepareGet("people", "man", id).get();
        if (!result.isExists()) {
            return new ResponseEntity(HttpStatus.NOT_FOUND);
        }
        return new ResponseEntity(result.getSource(), HttpStatus.OK);
    }

    @RequestMapping("/add/people/man")
    @ResponseBody
    public ResponseEntity add(@RequestParam(name = "name") String name,
                              @RequestParam(name = "age") int age,
                              @RequestParam(name = "date") @DateTimeFormat(pattern = "yyyy-MM-dd HH:mm:ss") Date date,
                              @RequestParam(name = "country") String country) {
        try {
            XContentBuilder content = XContentFactory.jsonBuilder()
                    .startObject()
                    .field("name", name)
                    .field("age", age)
                    .field("date", date.getTime())
                    .field("country", country)
                    .endObject();
            // prepareIndex builds the index request
            IndexResponse result = client.prepareIndex("people", "man")
                    .setSource(content)
                    .get();
            return new ResponseEntity(result.getId(), HttpStatus.OK);
        } catch (IOException e) {
            e.printStackTrace();
            return new ResponseEntity(HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }

    @RequestMapping("/del/people/man")
    @ResponseBody
    public ResponseEntity del(@RequestParam(name = "id") String id) {
        DeleteResponse result = client.prepareDelete("people", "man", id).get();
        return new ResponseEntity(result.getResult().toString(), HttpStatus.OK);
    }

    @RequestMapping("/update/people/man")
    @ResponseBody
    public ResponseEntity update(@RequestParam(name = "id") String id,
                                 @RequestParam(name = "name", required = false) String name,
                                 @RequestParam(name = "age", required = false) Integer age,
                                 @RequestParam(name = "date", required = false) @DateTimeFormat(pattern = "yyyy-MM-dd HH:mm:ss") Date date,
                                 @RequestParam(name = "country", required = false) String country) {
        UpdateRequest update = new UpdateRequest("people", "man", id);
        try {
            XContentBuilder content = XContentFactory.jsonBuilder().startObject();
            if (null != name) {
                content.field("name", name);
            }
            if (null != age) {
                content.field("age", age);
            }
            if (null != date) {
                content.field("date", date);
            }
            if (null != country) {
                content.field("country", country);
            }
            content.endObject();
            update.doc(content);
        } catch (IOException e) {
            e.printStackTrace();
            return new ResponseEntity(HttpStatus.INTERNAL_SERVER_ERROR);
        }
        try {
            UpdateResponse result = client.update(update).get();
            return new ResponseEntity(result.getResult().toString(), HttpStatus.OK);
        } catch (Exception e) {
            e.printStackTrace();
            return new ResponseEntity(HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }

    @RequestMapping("/query/people/man")
    @ResponseBody
    public ResponseEntity query(@RequestParam(name = "name", required = false) String name,
                                @RequestParam(name = "gt_age", defaultValue = "0") Integer minAge,
                                @RequestParam(name = "lt_age", required = false) Integer maxAge) {
        BoolQueryBuilder boolQuery = QueryBuilders.boolQuery();
        if (null != name) {
            boolQuery.must(QueryBuilders.matchQuery("name", name));
        }
        RangeQueryBuilder rangeQuery = QueryBuilders.rangeQuery("age").from(minAge);
        if (null != maxAge && 0 < maxAge) {
            rangeQuery.to(maxAge);
        }
        boolQuery.filter(rangeQuery);
        SearchRequestBuilder builder = client.prepareSearch("people")
                .setTypes("man")
                .setSearchType(SearchType.DFS_QUERY_THEN_FETCH)
                .setQuery(boolQuery)
                .setFrom(8)
                .setSize(10);
        // System.out.println(builder);
        SearchResponse response = builder.get();
        List<Map<String, Object>> result = new ArrayList<Map<String, Object>>();
        for (SearchHit hit : response.getHits()) {
            result.add(hit.getSource());
        }
        return new ResponseEntity(result, HttpStatus.OK);
    }
}
Elasticsearch basics: create, read, update, delete
Tags:
elasticsearch
================ Basic concepts =======================
Index — a collection of documents that share the same properties
Type — an index can define one or more types; every document must belong to one type
Document — the basic unit of data that can be indexed
Shard — every index has multiple shards, and each shard is a Lucene index
Replica — copying a shard produces a backup of that shard
============== Basic usage ============================
RESTful API
Basic API format: http://<ip>:<port>/<index>/<type>/<document id>
Common HTTP verbs: GET/PUT/POST/DELETE
========= Index ===========
Creating an index
1. Unstructured creation
Inspecting the index afterwards shows an empty mappings: {}
2. Structured creation
==== Creating it with elasticsearch-head
Compound query tab -
http://localhost:9200/
book/novel/_mappings
{
"novel": {
"properties": {
"title": {
"type": "text"
}
}
}
}
Submit the request as POST, with the "pretty" and "validate JSON" options checked
=== Creating the "people" index with Postman
http://localhost:9200/people + PUT + JSON body (Body → raw → JSON)
{
"settings":{
"number_of_shards":3,
"number_of_replicas":1
},
"mappings":{
"man":{
"properties":{
"name":{
"type":"text"
},
"country":{
"type":"keyword"
},
"age":{
"type":"integer"
},
"date":{
"type":"date",
"format":"yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis"
}
}
},
"woman":{
}
}
}
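Since the request body is plain JSON, it can also be generated and sanity-checked in code before sending it with curl or Postman. A minimal Python sketch mirroring the mapping above (it only builds the body; no request is sent, and an ES 5.x-style multi-type mapping is assumed):

```python
import json

# index-creation body for: PUT http://localhost:9200/people
payload = {
    "settings": {"number_of_shards": 3, "number_of_replicas": 1},
    "mappings": {
        "man": {
            "properties": {
                "name": {"type": "text"},
                "country": {"type": "keyword"},
                "age": {"type": "integer"},
                "date": {
                    "type": "date",
                    "format": "yyyy-MM-dd HH:mm:ss||yyyy-MM-dd||epoch_millis",
                },
            }
        },
        "woman": {},
    },
}

# e.g. save to body.json, then: curl -XPUT localhost:9200/people -d @body.json
body = json.dumps(payload)
```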
========= Insert ===========
1. Insert with an explicit document id
http://localhost:9200/people/man/1 + PUT + JSON body (Body → raw → JSON)
{
"name":"action",
"country":"China",
"age":30,
"date":"1989-11-17"
}
2. Insert with an auto-generated document id
http://localhost:9200/people/man + POST + JSON body (Body → raw → JSON)
{
"name":"xingdong",
"country":"China",
"age":18,
"date":"1989-10-20"
}
========= Update ===========
1. Direct partial update
http://localhost:9200/people/man/1/_update + POST + JSON body (Body → raw → JSON)
{
"doc":{
"name":"who is action"
}
}
2. Update via script
http://localhost:9200/people/man/1/_update + POST + JSON body (Body → raw → JSON)
{
"script":{
"lang":"painless",
"inline":"ctx._source.age += 10"
}
}
or
{
"script":{
"lang":"painless",
"inline":"ctx._source.age = params.age",
"params":{
"age":60
}
}
}
========= Delete ===========
1. Delete a document
http://localhost:9200/people/man/1 + DELETE
2. Delete an index
http://localhost:9200/people + DELETE
or use elasticsearch-head → actions → delete
========= Query ===========
1. Simple query
http://localhost:9200/people/man/1 + GET
2. Conditional query
http://localhost:9200/people/_search + POST
2.1 ===== match all ==========
{
"query":{
"match_all":{}
}
}
2.2 ======= paging with from/size =========
{
"query":{
"match_all":{}
},
"from":1,
"size":1
}
2.3 ======= match + sort =========
{
"query":{
"match":{
"name":"action"
}
},
"sort":[
{"age":{"order":"desc"}}
]
}
3. Aggregation queries
3.1 ======= terms buckets ====================
{
"aggs":{
"group_by_age":{
"terms":{
"field":"age"
}
},
"group_by_date":{
"terms":{
"field":"date"
}
}
}
}
3.2 =========== count + min + max + avg age ==================
{
"aggs":{
"group_by_age":{
"stats":{
"field":"age"
}
}
}
}
{
"aggs":{
"group_by_age":{
"min":{
"field":"age"
}
}
}
}
================ Advanced queries =====================
======= Leaf queries: match a particular value in a particular field =======
===== 1. Query context =====
In query context, besides deciding whether a document matches the query, ES computes a _score that measures how well the document matches.
Common queries
Full-text queries, for text data
Field-level queries, for structured data such as numbers and dates
1. === match_phrase: phrase match ====
Matches documents whose name is, e.g., "who is action" or "action is who"
{
"query":{
"match_phrase":{
"name":"who"
}
}
}
2. === multi_match ==== matches documents whose name or country field contains "action"
{
"query":{
"multi_match":{
"query":"action",
"fields":["name","country"]
}
}
}
3. === query_string === (OR and AND must be uppercase)
{
"query":{
"query_string":{
"query":"(action14 AND 13action) OR China",
"fields":["name","country"] // the fields list may be omitted
}
}
}
4.===term=====
{
"query":{
"term":{
"name":"action14"
}
}
}
5. ==== range ===== bounds: gt >, gte >=, lt <, lte <=
{
"query":{
"range":{
"age":{
"gte":10,
"lte":12
}
}
},
"sort":[
{"age":{"order":"desc"}}
]
}
{
"query":{
"range":{
"date":{
"gte":"2019-01-01",
"lte":"now" // i.e. the current date
}
}
}
}
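Range bodies like these are easy to generate programmatically. A small helper that builds the same JSON (the function name range_query is mine, not part of any ES client):

```python
import json

def range_query(field, gte=None, lte=None):
    """Build an Elasticsearch range-query body; only the bounds given are emitted."""
    bounds = {}
    if gte is not None:
        bounds["gte"] = gte
    if lte is not None:
        bounds["lte"] = lte
    return {"query": {"range": {field: bounds}}}

print(json.dumps(range_query("age", gte=10, lte=12)))
# → {"query": {"range": {"age": {"gte": 10, "lte": 12}}}}
```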
===== 2. Filter context =====
In filter context, ES only decides whether a document matches: yes or no.
{
"query":{
"bool":{
"filter":{
"term":{
"name":"action"
}
}
}
}
}
=== Compound queries: combine leaf queries with some logic ===
Common ones
1. constant_score (accepts only filter, not a bare match) 2. bool queries 3. ... more
==== Example: constant_score ====
1. == here _score is a computed relevance score ===
{
"query":{
"match":{
"name":"who"
}
}
}
2. == here _score is 1 ===
{
"query":{
"constant_score":{
"filter":{
"term":{
"name":"action"
}
}
}
}
}
3. == here _score is 2 ===
{
"query":{
"constant_score":{
"filter":{
"term":{
"name":"action"
}
},
"boost":2
}
}
}
==== Example: bool queries ====
1. === should ===
{
"query":{
"bool":{
"should":[
{
"match":{
"name":"China"
}
},
{
"match":{
"country":"China"
}
}
]
}
}
}
2. === must ===
{
"query":{
"bool":{
"must":[
{
"match":{
"name":"China"
}
},
{
"match":{
"country":"China"
}
}
]
}
}
}
3. === must + filter ======
{
"query":{
"bool":{
"must":[
{
"match":{
"name":"China"
}
},
{
"match":{
"country":"China"
}
}
],
"filter":{
"term":{
"age":16
}
}
}
}
}
4. === must_not ====
{
"query":{
"bool":{
"must_not":[
{
"term":{
"name":"action"
}
}
]
}
}
}
Elasticsearch single-instance and distributed installation
Tags:
elasticsearch
============ Single-instance install ============
Download page
https://www.elastic.co/cn/downloads/elasticsearch
After downloading and unpacking:
Run bin/elasticsearch (or bin\elasticsearch.bat on Windows); bin/elasticsearch -d runs it in the background
Run curl http://localhost:9200/ or Invoke-RestMethod http://localhost:9200 with PowerShell
============ Installing the head plugin (web UI) ============
https://github.com/mobz/elasticsearch-head // the web UI
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install
npm run start // or "npm run start &" to keep it in the background
At this point http://localhost:9100/ shows the cluster as disconnected
Edit elasticsearch/config/elasticsearch.yml
and add the following (mind the YAML spacing):
http.cors.enabled: true
http.cors.allow-origin: "*"
Restart elasticsearch and visit http://localhost:9100/ again; it now shows connected, with a cluster health value
=========== Distributed install ===================
Edit elasticsearch/config/elasticsearch.yml:
http.cors.enabled: true
http.cors.allow-origin: "*"
cluster.name: action
node.name: master
node.master: true
network.host: 127.0.0.1
Restart the elasticsearch server and visit http://localhost:9100/ to confirm the service is up
Add slave nodes
Create slave_1 and slave_2 directories and unpack the downloaded archive into each
Then edit each node's configuration:
slave_1/config/elasticsearch.yml
Add:
cluster.name: action
node.name: slave1
network.host: 127.0.0.1
http.port: 8000
discovery.zen.ping.unicast.hosts: ["127.0.0.1"]
slave_2/config/elasticsearch.yml
Add:
cluster.name: action
node.name: slave2
network.host: 127.0.0.1
http.port: 8200
discovery.zen.ping.unicast.hosts: ["127.0.0.1"]
Now http://localhost:9100/ shows all three instances (master, slave_1, slave_2)
Fixing Unassigned shards / yellow status after creating an index with elasticsearch-head on Windows
Searching around points to low free disk space as the cause; my disk was already over 90% full
You can check allocation with
http://localhost:9200/_cat/allocation?v
1. Raise the disk watermark
http://localhost:9200/_cluster/settings
Send a PUT request (Postman works):
{
"transient":{
"cluster.routing.allocation.disk.watermark.low": "90%"
}
}
2. Or change the data and log paths
path.data: E:\es\data
path.logs: E:\es\logs
PS: I fixed it by freeing up space on my D drive!
Source: https://www.cnblogs.com/carryLess/p/9452000.html
MySQL's group_concat() function
1. Purpose: concatenates the values in each group produced by group by and returns them as a single string.
2. Syntax: group_concat( [distinct] column_to_concat [order by sort_column asc/desc] [separator 'separator'] )
Notes: distinct removes duplicate values; use order by to sort the values within each group; separator is a string value, defaulting to a comma.
3. Example:
Table table_a with columns id, name, cid
Table table_b with columns id, title
table_b.id corresponds to table_a.cid
select b.*, (select group_concat(a.name) from table_a a where a.cid = b.id) names from table_b b;
select b.*, (select group_concat(a.name order by a.id desc separator '_') from table_a a where a.cid = b.id) names from table_b b;
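For intuition, here is how the distinct / order by / separator options behave, emulated in Python (the sample names are made up; unlike MySQL, this sketch always sorts the values for determinism):

```python
def group_concat(values, distinct=False, reverse=False, separator=","):
    """Emulate group_concat([distinct] col [order by col asc/desc] [separator '...'])."""
    if distinct:
        values = list(dict.fromkeys(values))  # drop duplicates, keep first occurrence
    values = sorted(values, reverse=reverse)  # order by (asc by default)
    return separator.join(values)

names = ["tom", "jerry", "tom", "anna"]
print(group_concat(names))                                # anna,jerry,tom,tom
print(group_concat(names, distinct=True, separator="_"))  # anna_jerry_tom
```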