Nebula-orchestrator manages container resources across data centers. Traditional resource orchestrators such as YARN, Mesos, and Kubernetes are designed for use inside a single data center: they assume the cluster is connected by a high-speed network and mainly target data-intensive or compute-intensive workloads. Wide-area applications such as IoT, edge computing, fog computing, and multi-data-center interconnection face transmission latency, low bandwidth, data synchronization, fault recovery, service routing, disaster backup, and similar challenges, and therefore need new solutions. Nebula-orchestrator uses MongoDB for data storage and RabbitMQ for messaging, and has the potential to scale to wide-area resource scheduling, though it is still at an early stage of development. The simple walkthrough below gives a feel for how Nebula-orchestrator operates and what it can do.
The best way to understand a new technology is to try it yourself. The tutorial below uses docker-compose to quickly set up a Nebula cluster. It requires docker & docker-compose to be installed in advance; the latest version of Docker for Mac is recommended.
docker-compose up -d
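For orientation, a compose file for this kind of setup roughly pairs the two backing stores (mongo & rabbit) with the two Nebula managers. The sketch below is hypothetical (the image names, ports, and environment variables are assumptions; the docker-compose.yml shipped in the Nebula repository is the authoritative version), but the container names mongo, rabbit, worker-manager, and api-manager match the ones used later in this tutorial.

```yaml
# Hypothetical sketch only -- use the docker-compose.yml from the Nebula repo.
version: "2"
services:
  mongo:
    image: mongo
    container_name: mongo
  rabbit:
    image: rabbitmq
    container_name: rabbit
  api-manager:
    image: nebulaorchestrator/manager   # assumed image name
    container_name: api-manager
    ports:
      - "80:80"                         # the REST API used by curl below
  worker-manager:
    image: nebulaorchestrator/worker    # assumed image name
    container_name: worker-manager
    environment:
      APP_NAME: example                 # the app this worker will manage
```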
(you might need to sudo su first if you didn't set your user to be part of the docker group). Don't worry if you see the worker-manager & api-manager restarting; that is because the mongo & rabbit containers aren't configured yet, so they fail to connect. Create the database user & schema with the following commands:
docker exec -it mongo mongo
use nebula
db.createUser( { user: "nebula", pwd: "nebula", roles: [ "readWrite" ] } )
Exit the container (ctrl-d).
Create the rabbitMQ user and vhost with the following commands:
docker exec -it rabbit sh
rabbitmqctl add_vhost nebula
rabbitmqctl add_user nebula nebula
rabbitmqctl set_permissions -p nebula nebula ".*" ".*" ".*"
Exit the container (ctrl-d).
The Nebula cluster is now running. Let's use curl to create an nginx app to fill the "example" APP_NAME that the worker-manager is set to manage:
curl -X POST \
  http://127.0.0.1/api/apps/example \
  -H 'authorization: Basic bmVidWxhOm5lYnVsYQ==' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/json' \
  -d '{
    "starting_ports": [{"81":"80"}],
    "containers_per": {"server": 1},
    "env_vars": {},
    "docker_image": "nginx",
    "running": true,
    "volumes": ["/tmp:/tmp/1", "/var/tmp/:/var/tmp/1:ro"],
    "networks": ["nebula"],
    "privileged": false,
    "devices": []
  }'
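If you prefer scripting the API call, the same request can be sketched with the Python standard library. The snippet below builds (but does not send) the request, which also makes clear that the authorization header is just the base64 encoding of the default nebula:nebula credentials; the endpoint and payload mirror the curl command above.

```python
import base64
import json
import urllib.request

# Same payload as the curl example: map host port 81 to container port 80
# and run one nginx container per worker.
payload = {
    "starting_ports": [{"81": "80"}],
    "containers_per": {"server": 1},
    "env_vars": {},
    "docker_image": "nginx",
    "running": True,
    "volumes": ["/tmp:/tmp/1", "/var/tmp/:/var/tmp/1:ro"],
    "networks": ["nebula"],
    "privileged": False,
    "devices": [],
}

# "Basic bmVidWxhOm5lYnVsYQ==" is just base64("nebula:nebula")
auth = base64.b64encode(b"nebula:nebula").decode()

req = urllib.request.Request(
    "http://127.0.0.1/api/apps/example",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Basic " + auth,
        "Content-Type": "application/json",
    },
    method="POST",
)

print(req.get_header("Authorization"))  # Basic bmVidWxhOm5lYnVsYQ==
# urllib.request.urlopen(req) would actually send it to the api-manager
```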
Since the nginx image needs to be downloaded, be patient if it is not already present on the machine.
You can watch the progress with the following command (wait until it reports completion):
docker logs -f worker-manager
Either wait for the change to be picked up (usually a few seconds at most) or restart the worker-manager container. You now have your first Nebula worker: try browsing to 127.0.0.1:81 to see it. Because the network in this tutorial is internal, you can only run more workers on the same machine (which rather defeats the purpose), but after you deploy Nebula by following the install guide you can run as many workers as you need by having multiple servers run the same worker-manager container with the same envvars/config file.
Video tutorial: asciinema