March 18, 2016, 14:24
I have been at this for three days and simply cannot get it working. But my advisor plans to use it to take on a project and has handed this big task to me, so I have no choice but to give it my all. After three days of nonstop digging I really have done my best; the main problem is that there is not much time left for me. I still have to study and review, and look for a summer internship, so for now this has to be put on hold. I am recording the results of these three days of work here, so that I can continue later if the chance comes up.
First, a few links:
https://github.com/cmusatyalab/elijah-openstack
https://github.com/cmusatyalab/elijah-cloudlet
http://hail.elijah.cs.cmu.edu
http://www.aboutyun.com/thread-13063-1-1.html
http://blog.sina.com.cn/s/blog_7643a1bf0102vhga.html
http://blog.csdn.net/bianer199/article/details/39687875
The first link is the main one: it contains the detailed process for extending OpenStack with Cloudlet, plus a link to the source code. But the steps are abbreviated; much of the process of installing OpenStack via devstack, and the installation of many components, is not mentioned, and that is exactly where the problems lie.
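For reference, a devstack deployment is driven by a local.conf file placed in the devstack checkout, and running ./stack.sh then brings up a single-node OpenStack; the cloudlet components from elijah-openstack are installed on top of that and are exactly the steps the first link abbreviates. A minimal local.conf might look like the sketch below (the host IP and passwords are placeholders of mine, not values from the cloudlet documentation):

    [[local|localrc]]
    # Minimal single-node devstack settings; replace the placeholders for your host.
    HOST_IP=192.168.1.10
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD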
The following is the Wikipedia article on Cloudlet:
A cloudlet is a mobility-enhanced small-scale cloud datacenter that is located at the edge of the Internet. The main purpose of the cloudlet is supporting resource-intensive and interactive mobile applications by providing powerful computing resources to mobile devices with lower latency. It is a new architectural element that extends today's cloud computing infrastructure. It represents the middle tier of a 3-tier hierarchy: mobile device --- cloudlet --- cloud. A cloudlet can be viewed as a data center in a box whose goal is to bring the cloud closer. The cloudlet term was first coined by M. Satyanarayanan, Victor Bahl, Ramón Cáceres, and Nigel Davies,[1] and a prototype implementation was developed by Carnegie Mellon University as a research project.[2] The concept of cloudlet is also known as mobile edge computing,[3][4] follow me cloud,[5] and mobile micro-cloud.[6] The cloudlet is a good example of a sub-application of the Dew computing[7] paradigm applied to mobile-like devices. Dew computing integrates the cloudlet, microservices, and edge computing, forming, together with fog and cloud computing, a distributed information service environment.
Many mobile services split the application into a front-end client program and a back-end server program following the traditional client-server model. The front-end mobile application offloads its functionality to the back-end servers for various reasons such as speeding up processing. With the advent of cloud computing, the back-end server is typically hosted at the cloud datacenter. Though the use of a cloud datacenter offers various benefits such as scalability and elasticity, its consolidation and centralization lead to a large separation between a mobile device and its associated datacenter. End-to-end communication then involves many network hops and results in high latencies and low bandwidth.
For latency reasons, some emerging mobile applications require the cloud offload infrastructure to be close to the mobile device to achieve low response time.[8] In the ideal case, it is just one wireless hop away. For example, the offload infrastructure could be located in a cellular base station or it could be LAN-connected to a set of Wi-Fi base stations. The individual elements of this offload infrastructure are referred to as cloudlets, and the entire collection of cloudlets is referred to as mobile edge computing, an industry initiative created by the European Telecommunications Standards Institute (ETSI).[3]
Cloudlets aim to support mobile applications that are both resource-intensive and interactive. Augmented reality applications that use head-tracked systems require end-to-end latencies of less than 16 ms.[9] Cloud games with remote rendering also require low latencies and high bandwidth.[10] A wearable cognitive assistance system combines a device like Google Glass with cloud-based processing to guide a user through a complex task. This futuristic genre of applications is characterized as “astonishingly transformative” by the report of the 2013 NSF Workshop on Future Directions in Wireless Networking.[11] These applications use cloud resources in the critical path of real-time user interaction. Consequently, they cannot tolerate end-to-end operation latencies of more than a few tens of milliseconds. Apple Siri and Google Now, which perform compute-intensive speech recognition in the cloud, are further examples in this emerging space.
There is significant overlap in the requirements for cloud and cloudlet. At both levels, there is the need for: (a) strong isolation between untrusted user-level computations; (b) mechanisms for authentication, access control, and metering; (c) dynamic resource allocation for user-level computations; and, (d) the ability to support a very wide range of user-level computations, with minimal restrictions on their process structure, programming languages or operating systems. At a cloud datacenter, these requirements are met today using the virtual machine (VM) abstraction. For the same reasons they are used in cloud computing today, VMs are used as abstraction for cloudlets. Meanwhile, there are a few but important differentiators between cloud and cloudlet.
Different from cloud data centers that are optimized for launching existing VM images in their storage tier, cloudlets need to be much more agile in their provisioning. Their association with mobile devices is highly dynamic, with considerable churn due to user mobility. A user from far away may unexpectedly show up at a cloudlet (e.g., if he just got off an international flight) and try to use it for an application such as a personalized language translator. For that user, the provisioning delay before he is able to use the application impacts usability.[12]
If a mobile device user moves away from the cloudlet he is currently using, interactive response will degrade as the logical network distance increases. To address this effect of user mobility, the offloaded services on the first cloudlet need to be transferred to the second cloudlet while maintaining end-to-end network quality.[13] This resembles live migration in cloud computing, but differs considerably in the sense that the VM handoff happens over a wide area network (WAN).
Since the cloudlet model requires reconfiguration or additional deployment of hardware/software, it is important to provide a systematic way to incentivize the deployment. However, it can face a classic bootstrapping problem. Cloudlets need practical applications to incentivize cloudlet deployment. However, developers cannot heavily rely on cloudlet infrastructure until it is widely deployed. To break this deadlock and bootstrap the cloudlet deployment, researchers at Carnegie Mellon University proposed OpenStack++, which extends OpenStack to leverage its open ecosystem.[2] OpenStack++ provides a set of cloudlet-specific APIs as OpenStack extensions.[14]
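The cloudlet-specific calls themselves are documented in the elijah-openstack repository and are not reproduced here. As a rough sketch only, the Python snippet below shows one way to ask a Nova endpoint which API extensions it has registered, which is a quick sanity check for whether a cloudlet extension has been loaded; the endpoint URL, tenant id, Keystone token, and the "cloudlet" string used for matching are assumptions of mine, not values taken from OpenStack++.

    import json
    import urllib.request

    # Placeholders: fill in the Nova endpoint and a valid Keystone token
    # from your own devstack deployment.
    NOVA_ENDPOINT = "http://192.168.1.10:8774/v2/<tenant-id>"
    AUTH_TOKEN = "<keystone-token>"

    def list_nova_extensions():
        """Return the API extensions reported by Nova's /extensions resource."""
        req = urllib.request.Request(
            NOVA_ENDPOINT + "/extensions",
            headers={"X-Auth-Token": AUTH_TOKEN, "Accept": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))["extensions"]

    if __name__ == "__main__":
        for ext in list_nova_extensions():
            # "cloudlet" is only a guess at the extension's name/alias,
            # not a value taken from the elijah-openstack source.
            hit = "cloudlet" in (ext.get("name", "") + ext.get("alias", "")).lower()
            print(("-> " if hit else "   ") + ext.get("name", "?"), ext.get("alias", ""))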