A Novel Cache Sharing Scheme and Implementation in Multi-Core Real-Time Linux System

Abstract:

Memory allocation and management for real-time and soft real-time tasks have long been a topic of concern. Page coloring is a practical technique for isolating cache space. Addressing the cache resource contention caused by shared memory in multi-core real-time systems, this article proposes a cache-space isolation scheme based on page coloring, together with its implementation, in order to resolve conflicting cache accesses. By partitioning the shared cache space, conflicts among data accesses in the cache are ultimately avoided.
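
Page coloring exploits the overlap between the physical page-frame number and the cache set-index bits: pages of different colors can never map to the same cache sets, so reserving disjoint colors for different tasks partitions the shared cache. Below is a minimal sketch in C of the color arithmetic; the cache geometry (2 MiB, 16-way, 4 KiB pages) and all identifiers are illustrative assumptions, not the implementation described in the article.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative cache geometry (assumed, not taken from the article):
 * a 2 MiB, 16-way set-associative shared cache with 4 KiB pages. */
#define CACHE_SIZE  (2u * 1024 * 1024)
#define CACHE_WAYS  16u
#define PAGE_SIZE   4096u
#define PAGE_SHIFT  12u

/* Number of page colors = (cache size / associativity) / page size,
 * i.e. how many distinct pages fit into one cache way. */
static unsigned num_colors(void)
{
    return (CACHE_SIZE / CACHE_WAYS) / PAGE_SIZE;
}

/* The color of a physical page is taken from the low bits of its
 * page frame number, which are also part of the cache set index. */
static unsigned page_color(uint64_t phys_addr)
{
    return (unsigned)((phys_addr >> PAGE_SHIFT) % num_colors());
}

int main(void)
{
    uint64_t a = 0x0010A000;   /* two example physical addresses */
    uint64_t b = 0x0012B000;

    printf("colors available: %u\n", num_colors());
    printf("color(a) = %u, color(b) = %u\n", page_color(a), page_color(b));

    /* Pages with different colors occupy disjoint cache sets, so
     * allocating each real-time task only pages of its own colors
     * isolates its share of the cache. */
    return 0;
}
```

In a Linux-kernel realization, the physical page allocator would additionally be constrained so that each real-time task receives pages only of its assigned colors (for example, by keeping per-color free lists); the sketch above shows only how colors follow from the cache geometry.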

Info:

Pages: 542-546

Online since: June 2013

Copyright: © 2013 Trans Tech Publications Ltd. All Rights Reserved
