What is docker-compose?
docker-compose is Docker's container orchestration tool: you describe the services your project needs in a YAML file, and compose brings them up for you.
PS: Production mostly runs on Kubernetes these days, but for local development docker-compose is extremely handy for spinning up exactly the services you need, when you need them. (Installation is not covered here.)
Let's start with an example.
docker-compose-mysql.yml
version: '3' # which version of the Compose file format to target; 1 is deprecated, 2 and 3 are current
services:
  mysql:
    container_name: mysql_dev # give the container a name
    image: "mysql:5.7"
    # https://hub.docker.com/_/mysql?tab=description
    environment: # environment variables, equivalent to -e in docker run; boolean-like values must be quoted
      TZ: Asia/Shanghai
      LANG: en_US.UTF-8
      MYSQL_ROOT_PASSWORD: "123456" # root password
      # MYSQL_USER: 'yxkong' # create an additional user (granted access to MYSQL_DATABASE)
      # MYSQL_PASSWORD: '123456' # that user's password
      MYSQL_DATABASE: demo # name of the database to create on first start
    restart: always # restart policy after the container exits
    ports: # port mappings
      - 3306:3306
    volumes: # volume mounts, mapping host paths into the container
      - "/Users/yxk/docker/mysql/my.cnf:/etc/mysql/my.cnf" # MySQL config file on the host
      - "/Users/yxk/docker/mysql/data:/var/lib/mysql" # data directory
      - "/Users/yxk/docker/mysql/log/error.log:/var/log/mysql/error.log" # error log output
      - "/Users/yxk/docker/mysql/init:/docker-entrypoint-initdb.d/" # init SQL scripts; several formats are supported
Run it with:
sudo docker-compose -f docker-compose-mysql.yml up -d
-f: path to the compose file (default: docker-compose.yml)
-d: start in detached (background) mode
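Once it is up, the same file drives the rest of the lifecycle; a quick sketch of the usual follow-up subcommands, applied to the file above:
# check the service status
sudo docker-compose -f docker-compose-mysql.yml ps
# tail the logs of the mysql service
sudo docker-compose -f docker-compose-mysql.yml logs -f mysql
# stop and remove the containers (and the default network)
sudo docker-compose -f docker-compose-mysql.yml down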
Common configuration options
- container_name: custom container name
- image: the image the container runs; you can search for images on https://hub.docker.com/
- cgroup_parent: assign the container to a parent cgroup, whose resource limits it then inherits
- environment: set environment variables, equivalent to -e in docker run; boolean-like values must be quoted
- command: override the container's default startup command
- depends_on: declare dependencies; dependencies are started first on startup and stopped in reverse order on shutdown (see the sketch after this list)
- deploy: deployment policy (not much of a concern in development; can be used to limit CPU and memory)
- expose: expose ports to linked services only, without publishing them to the host; port ranges are allowed
- logging: logging configuration for the service
- restart: no (default, never restart); always (always restart); on-failure (restart on abnormal exit); unless-stopped (always restart, except when the container was already stopped before the Docker daemon restarted)
- ulimits: set ulimit values
- volumes: map host directories or files into the container
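As a minimal sketch of depends_on and expose (hypothetical service names, for illustration only):
version: "3"
services:
  app:
    image: my-app:latest    # hypothetical application image
    depends_on:
      - db                  # db is started before app and stopped after it
    expose:
      - "8080"              # reachable by other services on the compose network, not published to the host
  db:
    image: mysql:5.7
The following example puts deploy, logging and ulimits together for a Redis service: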
version: "3"
services:
redis:
container_name: redis_demo
image: redis
command: redis-server /etc/redis/redis.conf --requirepass 123456 --appendonly yes
environment:
TZ: Asia/Shanghai
LANG: en_US.UTF-8
volumes:
- "/Users/yxk/docker/redis/data:/data"
- "/Users/yxk/docker/redis/redis.conf:/etc/redis/redis.conf"
ports:
- 6380:6379
deploy:
mode:replicated # replicated:复制服务,复制指定服务到集群的机器上;global:全局服务,服务将部署至集群的每个节点。
replicas: 2 # replicated模式下的节点数量
labels:
description: "This redis service label"
resources: #配置服务可使用的资源限制
limits:
cpus: '0.50'
memory: 200M
reservations:
cpus: '0.25'
memory: 200M
restart_policy: # 重启策略
condition: on-failure # 可选 none、on-failure、any(默认:any)
delay: 2s # 延迟5秒重启,默认0
max_attempts: 2 # 最大重试测试,默认一直重试
window: 60s # 重启超时时间
rollback_config: # 回滚策略
parallelism: 1 # 一次回滚的容器,0,一下子全回滚
delay: 5s # 每个容器回滚的等待时间
failure_action: pause # 回滚失败pause,还有一个是continue
monitor: 10s # 更新后观察10s是否有异常
max_failure_ratio: 0 # 容忍的故障率
order: stop-first # 操作顺序 stop-first 串行回滚,start-first 并行回滚,默认stop-first
logging:
driver: json-file #json-file、syslog、none
options:
#syslog-address: "tcp://127.0.0.1:22" 使用syslog时,可以指定
max-size: "200k" # 单个文件大小为200k
max-file: "10" # 最多10个文件
ulimits:
nproc: 65535
nofile:
soft: 10240
hard: 10240
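One caveat worth knowing: the deploy section is only honored when the file is deployed to a Swarm; plain docker-compose up generally ignores it (apart from the --compatibility translation of resource limits). A sketch of deploying it in swarm mode, assuming the file above is saved as docker-compose-redis.yml (hypothetical name):
# turn this node into a single-node swarm
docker swarm init
# deploy the stack from the compose file
docker stack deploy -c docker-compose-redis.yml redis_demo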
Now let's take a look at the docker-compose command itself:
Usage:
docker-compose [-f <arg>...] [--profile <name>...] [options] [--] [COMMAND] [ARGS...]
docker-compose -h|--help
Options:
-f, --file FILE Specify an alternate compose file # specify the compose YAML file
(default: docker-compose.yml)
-p, --project-name NAME Specify an alternate project name # project name
(default: directory name)
--profile NAME Specify a profile to enable
-c, --context NAME Specify a context name
--verbose Show more output
--log-level LEVEL Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
--ansi (never|always|auto) Control when to print ANSI control characters
--no-ansi Do not print ANSI control characters (DEPRECATED)
-v, --version Print version and exit
-H, --host HOST Daemon socket to connect to
--tls Use TLS; implied by --tlsverify
--tlscacert CA_PATH Trust certs signed only by this CA
--tlscert CLIENT_CERT_PATH Path to TLS certificate file
--tlskey TLS_KEY_PATH Path to TLS key file
--tlsverify Use TLS and verify the remote
--skip-hostname-check Don't check the daemon's hostname against the
name specified in the client certificate
--project-directory PATH Specify an alternate working directory
(default: the path of the Compose file)
--compatibility If set, Compose will attempt to convert keys
in v3 files to their non-Swarm equivalent (DEPRECATED)
--env-file PATH Specify an alternate environment file
Commands:
build Build or rebuild services # build images; docker-compose build --no-cache builds without using the cache
config Validate and view the Compose file # validate the file format: docker-compose -f docker-compose.yml config
create Create services # create containers
down Stop and remove resources # stop services and remove containers/networks
events Receive real time events from containers
exec Execute a command in a running container # run a command inside a running container
help Get help on a command
images List images # list images
kill Kill containers
logs View output from containers # view container logs
pause Pause services
port Print the public port for a port binding
ps List containers # quick overview of the containers defined in the compose file
pull Pull service images # pull images
push Push service images
restart Restart services # restart services
rm Remove stopped containers
run Run a one-off command
scale Set number of containers for a service
start Start services # start containers
stop Stop services # stop containers
top Display the running processes
unpause Unpause services
up Create and start containers # create and start containers: docker-compose -f docker-compose.yml up -d
version Show version information and quit
You can append --help to any subcommand to see its details, e.g. docker-compose ps --help:
yxkdeMacBook-Pro:doc yxk$ docker-compose ps --help
List containers.
Usage: ps [options] [--] [SERVICE...]
Options:
-q, --quiet Only display IDs
--services Display services
--filter KEY=VAL Filter services by a property
-a, --all Show all stopped containers (including those created by the run command)
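The -q flag combines nicely with plain docker commands; a small sketch (assuming the docker-compose.yml from the next section):
# show a one-shot resource-usage snapshot for every container defined in the file
docker-compose -f docker-compose.yml ps -q | xargs docker stats --no-stream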
Setting up common development tools
This covers Kafka, Redis, Eureka, MySQL, Nacos, and so on.
The docker-compose.yml file:
version: '3'
services:
  zookeeper:
    container_name: zookeeper_dev
    image: zookeeper:3.5.5
    restart: always
    hostname: zookeeper
    ports:
      - 2181:2181
    volumes:
      - "/etc/localtime:/etc/localtime"
      - "/Users/yxk/docker/zookeeper/data:/data"
      - "/Users/yxk/docker/zookeeper/log:/var/log/zookeeper"
  kafka:
    container_name: kafka_dev
    image: wurstmeister/kafka:2.12-2.2.1
    hostname: kafka
    # configuration parameters: https://hub.docker.com/r/wurstmeister/kafka
    restart: always
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      # KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092 # advertised port
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://10.203.8.28:9092 # use the host IP, or an IP both the container and the host can resolve; this is what Kafka publishes to ZooKeeper for external clients. By default the container ID is used, which only resolves inside the container via its hosts file
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092 # listen on all interfaces; this is where the Kafka service itself listens
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_CREATE_TOPICS: "zipkin:1:1,zipkin.log.info:1:1,zipkin.log.warn:1:1,zipkin.log.error:1:1" # topics to create on startup, format topic:partitions:replicas
      KAFKA_MESSAGE_MAX_BYTES: 6000000
      KAFKA_REPLICA_FETCH_MAX_BYTES: 6000000
      KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS: 60000
      KAFKA_NUM_PARTITIONS: 2
    volumes:
      - "/etc/localtime:/etc/localtime"
      - "/Users/yxk/docker/kafka/docker.sock:/var/run/docker.sock"
      - "/Users/yxk/docker/kafka/data:/kafka" # Kafka data
    depends_on:
      - zookeeper
  kafka-manager:
    container_name: kafka_manager_dev
    image: sheepkiller/kafka-manager
    restart: always
    links:
      - zookeeper
      - kafka
    environment:
      ZK_HOSTS: zookeeper:2181
      APPLICATION_SECRET: "yxkong"
      KAFKA_MANAGER_AUTH_ENABLED: "true" # enable kafka-manager authentication
      KAFKA_MANAGER_USERNAME: 5ycode # login user
      KAFKA_MANAGER_PASSWORD: yxkong # login password
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    ports:
      - 9000:9000
    depends_on:
      - zookeeper
    volumes:
      - "/etc/localtime:/etc/localtime"
  redis:
    container_name: redis_dev
    image: redis
    restart: always
    command: redis-server /etc/redis/redis.conf --requirepass 123456 --appendonly yes
    environment:
      TZ: Asia/Shanghai
      LANG: en_US.UTF-8
    volumes:
      - "/etc/localtime:/etc/localtime"
      - "/Users/yxk/docker/redis/data:/data"
      - "/Users/yxk/docker/redis/redis.conf:/etc/redis/redis.conf"
    ports:
      - 6379:6379
  mysql:
    container_name: mysql_dev
    image: "mysql:5.7"
    # hostname: yxkong
    # https://hub.docker.com/_/mysql?tab=description
    environment:
      TZ: Asia/Shanghai
      LANG: en_US.UTF-8
      MYSQL_ROOT_PASSWORD: "123456" # root password
      # MYSQL_USER: 'yxkong' # create an additional user (granted access to MYSQL_DATABASE)
      # MYSQL_PASSWORD: '123456' # that user's password
      MYSQL_DATABASE: demo # name of the database to create on init; only one can be listed here, additional databases/tables can be created via the init SQL scripts
    restart: always
    ports:
      - 3306:3306
    volumes:
      - "/etc/localtime:/etc/localtime"
      - "/Users/yxk/docker/mysql/my.cnf:/etc/mysql/my.cnf"
      - "/Users/yxk/docker/mysql/data:/var/lib/mysql"
      - "/Users/yxk/docker/mysql/log/error.log:/var/log/mysql/error.log"
      - "/Users/yxk/docker/mysql/init:/docker-entrypoint-initdb.d/"
#  nacos:
#    container_name: nacos_dev
#    image: "nacos/nacos-server:v2.0.3"
#    # env_file:
#    #   - ../nacos/nacos-standlone-mysql.env
#    environment:
#      - PREFER_HOST_MODE=hostname # use hostname if supported, otherwise IP; the default is IP
#      - MODE=standalone # start in standalone mode
##     - SPRING_DATASOURCE_PLATFORM=mysql # datasource platform; only mysql is supported, or leave empty for no persistence
#      # TODO adjust the MySQL connection info
#      - MYSQL_SERVICE_HOST=127.0.0.1 # note: this must NOT be `127.0.0.1` or `localhost`!
#      - MYSQL_SERVICE_DB_NAME=nacos
#      - MYSQL_SERVICE_PORT=3306
#      - MYSQL_SERVICE_USER=root
#      - MYSQL_SERVICE_PASSWORD=123456
#      - MYSQL_SERVICE_DB_PARAM=characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useSSL=false
#      # TODO adjust the JVM tuning parameters
#      - JVM_XMS=128m # -Xms, default: 2g
#      - JVM_XMX=128m # -Xmx, default: 2g
#      - JVM_XMN=64m # -Xmn, default: 1g
#      - JVM_MS=32m # -XX:MetaspaceSize, default: 128m
#      - JVM_MMS=32m # -XX:MaxMetaspaceSize, default: 320m
#      - NACOS_DEBUG=n # enable remote debug, y/n, default n
#      - TOMCAT_ACCESSLOG_ENABLED=false # enable Tomcat access logging, default false
#    restart: always
#    ports:
#      - "8848:8848"
#      - "9848:9848"
#      - "9555:9555"
#    volumes:
#      - "/etc/localtime:/etc/localtime"
#      - "./logs:/home/nacos/logs/"
#      - "./custom.properties:/home/nacos/init.d/custom.properties"
#      - "./nacos/application.properties:/home/nacos/conf/application.properties"
#    depends_on:
#      - mysql
  eureka-server:
    image: 5ycode/eureka:0.1
    container_name: eureka-server_dev
    restart: always
    environment:
      - JVM_XMS=312m
      - JVM_XMX=312m
      - REMOTE_DEBUG=y
    ports:
      - "8765:8765"
    volumes:
      - "/etc/localtime:/etc/localtime"
      - ./logs/eureka-server:/logs
PS: Pay close attention to networking. Services brought up by the same compose file share a network and can reach each other directly (by service name); applications running outside that network should connect through the host machine's IP.
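For example, with the MySQL service above (illustrative JDBC URLs; replace <host-ip> with your machine's actual IP):
# from another service in the same compose file, use the service name as the hostname
jdbc:mysql://mysql:3306/demo
# from an application running on the host (outside the compose network), use the host IP and the published port
jdbc:mysql://<host-ip>:3306/demo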
The MySQL configuration file my.cnf:
# The MySQL server
[mysqld]
# user MySQL runs as
user=mysql
# port the MySQL server listens on
port = 3306
# default storage engine for new tables
default-storage-engine=INNODB
# default character set
character-set-server=utf8
# MySQL data directory
datadir = /var/lib/mysql
socket = /var/run/mysqld/mysqld.sock
# server PID file; if it is missing, restarting MySQL regenerates it. If the restart fails,
# the mysqld process may not have been killed; run pkill mysql and MySQL will restart successfully
pid-file =/var/run/mysqld/mysqld.pid
# error log path
log-error=/var/log/mysql/error.log
# server id (must be unique within the same network)
server_id = 22
# binlog file location (do not put it alongside the data directory)
#log-bin=/var/lib/binlog/
# binlog format
# 1. STATEMENT: statement-based; the binlog stays small, but some statements and functions may cause inconsistencies or errors during replication;
# 2. MIXED: mixed mode, chooses STATEMENT or ROW per statement;
# 3. ROW: row-based, records the complete row change; safe, but the binlog is much larger than in the other two modes;
binlog_format=ROW
# FULL: log the full row image for every change; MINIMAL: log only the affected columns
binlog_row_image=FULL
# maximum size of a single binlog file
max_binlog_size=256M
# purge binlogs older than this (set to 7 days here)
expire_logs_days=7
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
explicit_defaults_for_timestamp=true
lower_case_table_names=1
# SQL modes MySQL should enforce: syntax and data validation behavior
sql_mode=STRICT_TRANS_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,NO_ENGINE_SUBSTITUTION
performance_schema_max_table_instances=400
table_definition_cache=400
table_open_cache=256
max_allowed_packet = 32M
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# InnoDB buffer pool size; typically around 80% of RAM on a dedicated database server
innodb_buffer_pool_size = 128M
# IP address MySQL binds to
#bind-address = 127.0.0.1
# maximum number of connections
max_connections=200
[mysql]
default-character-set=utf8
# The following options will be passed to all MySQL clients
[client]
# default character set for the mysql client
default-character-set=utf8
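A quick way to confirm the container actually picked up this file (using the container name and root password from the compose file above; just a sanity check):
docker exec -it mysql_dev mysql -uroot -p123456 -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'max_connections';"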
The redis.conf file
You can fetch it directly with wget http://download.redis.io/redis-stable/redis.conf, or use the one below.
# with protected mode enabled, you must bind an IP or set a password
protected-mode yes
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
supervised no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 16
always-show-logo yes
# snapshot if at least 1 key changed within 900 seconds
save 900 1
# snapshot if at least 10 keys changed within 300 seconds
save 300 10
# snapshot if at least 10000 keys changed within 60 seconds
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir ./
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
# set the password; already set via the command in the compose file
# requirepass 123456
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
oom-score-adj no
oom-score-adj-values 0 200 800
appendonly no
appendfilename "appendonly.aof"
# always: fsync the AOF on every write
# appendfsync always
# everysec: fsync once per second (the default)
appendfsync everysec
# no: never fsync explicitly, let the OS flush on its own (roughly every 30s)
# appendfsync no
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
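To verify the running container loaded this config (using the container name and password from the compose file above; purely a sanity check):
docker exec -it redis_dev redis-cli -a 123456 CONFIG GET appendonly
docker exec -it redis_dev redis-cli -a 123456 CONFIG GET save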
Pitfalls hit during setup:
- Creating zookeeper_dev or Starting zookeeper_dev hangs, then reports UnixHTTPConnectionPool(host='localhost', port=None): Read timed out.
sudo docker-compose -f docker-compose.yml up -d
Password:
Starting redis_dev ...
Starting mysql_dev ...
Starting zookeeper_dev ...
ERROR: for redis_dev UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: for mysql_dev UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: for zookeeper_dev UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: for redis UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: for mysql UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
ERROR: for zookeeper UnixHTTPConnectionPool(host='localhost', port=None): Read timed out. (read timeout=60)
My first instinct was that this is a timeout setting that can be tuned.
After some searching, I added timeout settings to /etc/profile:
export DOCKER_CLIENT_TIMEOUT=500
export COMPOSE_HTTP_TIMEOUT=500
The error still occurred.
Solution: restarting Docker fixed it.
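For reference, the restart itself is just the usual daemon restart (on macOS, restart Docker Desktop from the menu bar; the commands below assume a Linux host with systemd):
# restart the Docker daemon
sudo systemctl restart docker
# then bring the services up again
sudo docker-compose -f docker-compose.yml up -d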
Some frequently used commands:
# list local images
docker images
# remove a local image
docker rmi <image name>
# list running containers
docker ps
# create and run a container
docker run -it --name <container name> <image name>
# view a container's logs
docker logs -f <container id>
# start / restart / stop a container
docker start|restart|stop <container name or id>
# open a new interactive shell inside a container
docker exec -it <container name> /bin/bash
# common command combinations
# find the image IDs matching a keyword
docker images | grep zookeeper | awk '{print $3}'
# remove the matching images
docker images | grep zookeeper | awk '{print $3}' | xargs docker rmi
# remove the containers matching a keyword
docker ps | grep mysql | awk '{print $1}' | xargs docker rm
# stop and remove all containers
docker stop `docker ps -q -a` | xargs docker rm
# find a container's IP address
docker inspect <container name or id> | grep "IPAddress"
# inspect the Kafka registration info in ZooKeeper
docker exec -it zookeeper_dev bash bin/zkCli.sh
# view broker details
ls /brokers/ids/1
get /brokers/ids/1
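As a follow-up, topics can also be listed from the Kafka container itself (assuming the wurstmeister/kafka image above, where the Kafka scripts are on the PATH):
docker exec -it kafka_dev kafka-topics.sh --zookeeper zookeeper:2181 --list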