The convergence of real-time embedded systems, wireless sensor networks, and machine learning has fueled the rapid development of the Internet of Things (IoT), engendering new computational workloads and generating unprecedented amounts of streaming data. As a result, the computational infrastructure for IoT faces challenges of scalability, varying availability and performance, and heterogeneity. Highly concurrent event-driven architectures (EDAs) are one promising approach to building large-scale IoT systems, since they natively provide concurrency, scheduling, and service decoupling and isolation. Serverless computing, an EDA, has emerged as a next-generation event-driven platform that addresses many of these challenges.
In addition, new tiered cloud architectures consisting of low-capability IoT devices, computing and storage resources sited "at the network edge", and public cloud resources provide the opportunity to optimize the placement of computation and storage tasks to meet the performance and reliability requirements of IoT applications. The "edge cloud" is a service-hosting technology (like a public cloud) located at the network edge, enabling IoT deployments to exploit spatial locality to optimize resource utilization, reduce wide-area bandwidth requirements, lower response latency, and improve application fault resilience and security. To maximize these benefits, an efficient scheduling system that intelligently places workloads across IoT, edge cloud, and private/public cloud resources is indispensable. In this thesis, we report our research on building a scalable, event-driven, geo-distributed intelligent scheduling system for heterogeneous IoT devices and applications at the network edge.
To achieve this goal, we investigate the efficacy of using a serverless computing platform for tuning machine learning applications in parallel. We also investigate the use of serverless computing across edge and private/public cloud deployments for intelligent scheduling. A third investigation focuses on controlling the temperature of edge cloud resources via dynamic voltage and frequency scaling (DVFS) to prevent overheating in environments hostile to computational infrastructure. Our work contributes to the corpus of computation-offloading research for cyber-physical systems (CPS), which pairs workloads with resources from an available pool of heterogeneous IoT devices, edge cloud resources, and private/public cloud resources.