Author: Herbert Yuan Email
Published: 2017-07-27 Site: Inside Linux Development

Peter Harris 

Hi levi,

In a coherent multi-core system, data is always kept in sync between the caches, provided the pages are marked as shareable in the page tables, without any need for explicit cache maintenance operations in the code. If explicit cache maintenance were required, SMP operating systems simply wouldn't work - it has to be transparent and automatic.

Barriers are a totally different aspect, not really related to cache coherency at all. ARM has a weakly ordered memory model - instructions in one thread are allowed to complete out of order with respect to each other in many cases, unless there is an address dependency. For example:

STR r0, [msg]
STR r0, [msg_written]

If msg and msg_written are different addresses, then without a barrier we could write the msg_written flag into memory before writing the message itself. If another core polls that msg_written address and then tries to read msg, it will get the wrong data. Barriers are effectively restrictions on instruction completion that ensure things happen in the right order:

STR r0, [msg]
DMB
STR r0, [msg_written]

The DMB guarantees that the msg write is committed before the msg_written write is committed. Note that this has nothing to do with cache visibility to other processors - that's all magically handled by the SMP hardware - we just need to guarantee that the two writes into the cache are ordered correctly to get the behaviour the programmer intended.



Memory barriers (DSB, DMB): do they guarantee that data in the cache is written to memory?

Copyright © 2017-2021. Some Rights Reserved.