Computing Systems for AI

Edge Computing & Mobile Devices

Research on Collaborative Inference for Heterogeneous Hardware Platforms
This research focuses on applying edge computing techniques to neural network inference on mobile devices. It addresses how to optimize energy consumption while preserving performance under limited resources, particularly on heterogeneous SoC architectures, by dynamically distributing work across the different processors to balance performance and energy consumption.
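The page itself gives no implementation details; as a rough illustration of the performance/energy trade-off on a heterogeneous SoC, the sketch below greedily assigns each network layer to its lowest-energy processor and repairs the plan when a latency budget is exceeded. All layer names, latency, and energy numbers are invented, and real collaborative-inference schedulers would also model data-transfer costs between processors.

```python
# Hypothetical sketch: per-layer processor assignment on a heterogeneous SoC.
# All latency (ms) and energy (mJ) figures below are illustrative, not measured.

LAYER_COSTS = {
    # layer: {device: (latency_ms, energy_mj)}
    "conv1": {"cpu": (8.0, 12.0), "gpu": (3.0, 9.0), "npu": (2.0, 4.0)},
    "conv2": {"cpu": (15.0, 22.0), "gpu": (5.0, 14.0), "npu": (3.5, 6.0)},
    "fc":    {"cpu": (4.0, 3.0),  "gpu": (1.5, 5.0),  "npu": (4.5, 7.0)},
}

def assign_layers(costs, latency_budget_ms):
    """Pick the lowest-energy device per layer; while the summed latency
    exceeds the budget, move the layer with the largest latency saving
    onto its fastest device."""
    plan = {layer: min(devs, key=lambda d: devs[d][1])  # min energy
            for layer, devs in costs.items()}

    def total_latency(p):
        return sum(costs[layer][dev][0] for layer, dev in p.items())

    while total_latency(plan) > latency_budget_ms:
        # layer whose fastest device would save the most latency
        best = max(plan, key=lambda l: costs[l][plan[l]][0]
                   - min(v[0] for v in costs[l].values()))
        fastest = min(costs[best], key=lambda d: costs[best][d][0])
        if plan[best] == fastest:
            break  # budget unreachable even on the fastest devices
        plan[best] = fastest
    return plan
```

With a loose budget every layer stays on its most energy-efficient processor; tightening the budget moves `fc` to the GPU, which in this toy cost table trades 2 mJ for a 2.5 ms latency saving.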
Resource Management and Optimization in Edge and Mobile Systems
This research focuses on resource management and optimization in edge computing and mobile devices. By proposing a full-link instrumentation framework for performance-bottleneck analysis, a reinforcement-learning-based CPU/GPU frequency-scaling algorithm, and a performance- and power-aware scheduling algorithm, it addresses the challenge of balancing performance and energy efficiency on mobile devices in resource-constrained environments.
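The description does not specify the reinforcement-learning formulation; one minimal reading of "RL-based frequency scaling" is sketched below as tabular Q-learning, where states are coarse load levels, actions are frequency steps, and the reward trades a deadline-miss penalty against power draw. The frequency list, demand values, and f² power model are all assumptions for illustration.

```python
import random

# Hypothetical sketch: a tabular Q-learning frequency governor.
# States are coarse load levels, actions are CPU frequency steps;
# the reward trades a deadline-miss penalty against power draw.
# Frequencies, demands, and the power model are invented.

FREQS = [0.6, 1.2, 1.8, 2.4]        # GHz steps (assumed)
LOADS = ["low", "med", "high"]

def reward(load, freq):
    """Toy environment: latency falls and power rises with frequency."""
    demand = {"low": 0.5, "med": 1.2, "high": 2.0}[load]
    latency = demand / freq
    power = 0.5 * freq ** 2                   # dynamic power ~ f^2 (simplified)
    penalty = 5.0 if latency > 1.0 else 0.0   # deadline-miss penalty
    return -(power + penalty)

def train(episodes=2000, alpha=0.3, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, f): 0.0 for s in LOADS for f in FREQS}
    for _ in range(episodes):
        s = rng.choice(LOADS)
        if rng.random() < eps:                # epsilon-greedy exploration
            f = rng.choice(FREQS)
        else:
            f = max(FREQS, key=lambda x: q[(s, x)])
        q[(s, f)] += alpha * (reward(s, f) - q[(s, f)])  # move toward observed reward
    return {s: max(FREQS, key=lambda x: q[(s, x)]) for s in LOADS}
```

Each load level is treated here as an independent one-step decision (a contextual bandit), which sidesteps the temporal-credit-assignment side of full RL; a governor on real hardware would additionally track state transitions and learn from noisy, measured power rather than a closed-form model.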
Generalizability of Reinforcement Learning Frequency-Scaling Algorithms on Edge Devices
This research focuses on the generalization problem of reinforcement-learning frequency-scaling algorithms across edge devices. By combining neural architecture search, meta-learning, and online training, it proposes a highly adaptive frequency-scaling solution that addresses the limited generalization of traditional algorithms across diverse scenarios and devices.
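The page does not say which meta-learning method is used; as one possible reading, the sketch below uses a Reptile-style loop to learn an initialization for a per-device power model that can then be adapted online to an unseen device with ordinary SGD. The quadratic power model, device coefficients, and learning rates are all invented for illustration.

```python
# Hypothetical sketch: Reptile-style meta-learning of an initialization
# for a per-device power model p(f) ~ a*f^2 + b, followed by adaptation
# to an unseen device. Coefficients and learning rates are invented.

def sgd_steps(params, data, lr=0.05, epochs=50):
    """Per-sample SGD on squared error for the toy model p = a*f^2 + b."""
    a, b = params
    for _ in range(epochs):
        for f, p in data:
            err = a * f ** 2 + b - p
            a -= lr * err * f ** 2   # d(0.5*err^2)/da
            b -= lr * err            # d(0.5*err^2)/db
    return a, b

def samples(true_a, true_b):
    """Noise-free power readings at a few frequency points (GHz)."""
    return [(f, true_a * f ** 2 + true_b) for f in (0.5, 1.0, 1.5, 2.0)]

def meta_train(devices, meta_lr=0.5, rounds=30):
    """Reptile: adapt to each training device in turn, then move the
    shared initialization partway toward the adapted parameters."""
    a, b = 0.0, 0.0
    for _ in range(rounds):
        for true_a, true_b in devices:
            a2, b2 = sgd_steps((a, b), samples(true_a, true_b))
            a += meta_lr * (a2 - a)
            b += meta_lr * (b2 - b)
    return a, b

# Online adaptation to an unseen device (true a=0.5, b=0.3) from the
# meta-learned initialization.
init = meta_train([(0.3, 0.1), (0.7, 0.5)])
a_new, b_new = sgd_steps(init, samples(0.5, 0.3))
```

The meta-learned initialization lands near the training devices' coefficients, so adaptation to a new device starts close to its true parameters; the neural-architecture-search component mentioned in the description (searching the policy network structure itself) is beyond the scope of this toy model.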