Differences From Artifact [c6a5af4a20]:
- File
src/atomic.h
— part of check-in
[96a128f954]
at
2016-07-30 21:18:37
on branch trunk
— atomic.h: Improve memory barrier
Replace of_memory_read_barrier() and of_memory_write_barrier() - which
are quite unspecific - with of_memory_enter_barrier() and
of_memory_leave_barrier(). Also add an assembly implementation for ARM
and ARM64. (user: js, size: 20222)
To Artifact [23c4a049f4]:
- File
src/atomic.h
— part of check-in
[9feaa90358]
at
2016-07-30 21:46:24
on branch trunk
— of_memory_barrier(): Only use mfence on x86_64
This is only available on x86 with SSE2, while it's always available on
x86_64. However, checking if SSE2 is available here would be too slow,
therefore let the compiler decide what to do instead (which will depend
on the selected target CPU). (user: js, size: 20199)
︙

src/atomic.h, lines 968-982 as of check-in [9feaa90358] (the mfence #elif is the changed line):

}

static OF_INLINE void
of_memory_barrier(void)
{
#if !defined(OF_HAVE_THREADS)
	/* nop */
#elif defined(OF_X86_64_ASM)
	__asm__ __volatile__ (
		"mfence" ::: "memory"
	);
#elif defined(OF_POWERPC_ASM)
	__asm__ __volatile__ (
		"sync" ::: "memory"
	);
︙